If you've been using AI tools for real work over the past two years, you've probably noticed something uncomfortable: AI doesn't remember anything.
Every new conversation starts from scratch. It doesn't remember which angles you ruled out last time, what your readers care about, which phrasing you hate, or what your brand voice sounds like. You're effectively retraining it every single session.
For solo developers and indie hackers, this problem cuts deeper than it does for teams.
The Context Problem Gets Worse When You're Alone
In a company, AI tools are supplements. Team members provide context through documents, Slack history, and established patterns. Someone maintains the product bible. Someone else handles code reviews and makes sure new engineers understand the architecture. The team's process compensates for AI's limitations.
As a solo developer, you don't have that buffer.
You might be writing code in the morning, drafting content after lunch, analyzing user data in the afternoon, and building strategy in the evening — all in one day, all with AI that starts fresh every time. The ceiling of what AI can help you with is determined by how precisely you can describe the context in each conversation.
The irony: sometimes describing the context takes more time than the actual work.
Here's a concrete example. Say you're building a SaaS product. You ask AI to write an onboarding email sequence. It comes out reasonable — professional, clear, technically correct. But it's generic. To get it to sound like your brand, you'd need to tell it things like: "We're a developer-tools company. Our tone is direct but not cold. We use humor sparingly and only when it adds clarity. We're B2B but our users are individual developers, not procurement teams." That's probably 30-40 words of context.
Now for the next task — a changelog update — you'd need to repeat most of that. And for a Twitter thread announcing the update, most of it again. And for the reply to a user complaint, the context once more.
The aggregate cost of these micro-contexts across a full day of AI-assisted work is significant. Most people don't track it precisely, but the feeling is familiar: AI is helpful but somehow exhausting.
A Day in the Life: What This Actually Feels Like
Let me paint a fuller picture of what a typical AI-assisted workday looks like for a solo developer right now.
9:00 AM — You open your code editor. You ask AI to explain a piece of legacy code you wrote six months ago. It helps, but you have to re-establish that this is a B2B API product, that performance is critical, that you follow a specific architectural pattern. Five minutes of context, five minutes of work.
10:30 AM — You switch to writing a pricing page. New conversation. You have to re-explain that you're developer-focused, that your buyers are engineers not executives, that your pricing model is usage-based. The AI helps you write it. Another ten minutes of context setting.
12:00 PM — You need to respond to a user support issue about rate limiting. Fresh conversation. You explain the technical context again. The AI writes a response that's technically accurate but doesn't match your tone — too formal, too corporate. You spend twenty minutes editing it to sound like you.
2:00 PM — You want to analyze some user behavior data. You open a data tool, paste in some numbers, ask AI to look for patterns. The analysis is helpful, but you have to remind it about your product's specific use cases so it doesn't flag false anomalies.
4:00 PM — You draft a tweet about a new feature. The voice is wrong again. Too promotional. Not your usual angle. You rewrite it yourself.
6:00 PM — You need to review a contract for a new API integration. You paste it into an AI tool. It reviews the contract correctly but gives you advice that's appropriate for a large enterprise, not a two-person startup. You spend thirty minutes re-calibrating it with your company's stage, risk tolerance, and resources.
By the end of the day, you've had eight or nine separate AI interactions. Each one was productive in isolation. None of them accumulated. None of them built on the previous work. And each one required re-establishing context that felt like it should have been remembered.
Multiply this by five days a week, fifty weeks a year. The time cost isn't dramatic in any single session. It's the compounding that makes it significant.
The Financial Cost Is Real, Even If It's Invisible
Most solo developers don't put a dollar figure on context re-establishment. But you can.
If you spend, conservatively, ten minutes re-establishing context at the start of each significant AI-assisted task, and you do five such tasks per day, that's fifty minutes per day. At a fully-loaded hourly rate of $100 (modest for a skilled solo developer), that's about $83 per day in context tax. Five days a week, fifty weeks a year: roughly $20,000 per year in efficiency loss.
That's an estimate with significant error bars. The real number might be half that, or double. But the order of magnitude is real. Context re-establishment is not free. It's a meaningful cost that comes out of the value AI is supposed to deliver.
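The arithmetic behind that estimate can be made explicit. Here's a quick sketch; the default inputs are the article's assumptions above, not measured data, and you should substitute your own:

```python
# Back-of-envelope estimate of the annual "context tax" from
# re-establishing AI context. Defaults mirror the article's assumptions.

def annual_context_tax(minutes_per_task: float = 10,
                       tasks_per_day: int = 5,
                       hourly_rate: float = 100,
                       days_per_week: int = 5,
                       weeks_per_year: int = 50) -> float:
    """Return the yearly dollar cost of context re-establishment."""
    daily_minutes = minutes_per_task * tasks_per_day   # 50 min/day
    daily_cost = (daily_minutes / 60) * hourly_rate    # ~$83/day
    return daily_cost * days_per_week * weeks_per_year # ~$20,800/yr

print(round(annual_context_tax()))
```

Halve or double any input and the result stays in the five-figure range, which is the point: the order of magnitude survives the error bars.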
The reason this cost is invisible is that it's not billed separately. It shows up as "AI is helpful but I still feel behind." It's attributed to the pace of work, or to the complexity of the product, or to the normal friction of running a business. It's almost never attributed to the specific design choice of AI systems that don't maintain state between sessions.
This invisibility is a problem. When costs are invisible, they're not managed. And when they're not managed, they compound.
Why This Problem Gets Less Attention Than It Deserves
The context accumulation problem has gotten far less attention than its real impact warrants.
Part of the reason is that the individual AI tools are genuinely impressive. It's easy to focus on what they do well (produce good output) rather than on the overhead around them (producing that output requires re-establishing context every time).
Part of the reason is that the problem is hard to measure. How do you quantify "time spent re-explaining context"? Most people don't. They just feel tired at the end of the day and attribute it to the normal pace of work.
Part of the reason is that the people most affected — solo developers and indie hackers — are also the people least likely to complain publicly. They're solving their own problems with whatever tools work. If there's friction, they absorb it rather than write about it.
And part of the reason is that the problem is structural. AI companies are incentivized to optimize for single-session quality. They benchmark their models on how well they perform in isolated tasks. They don't have good metrics for "how much context does this tool accumulate over time" or "how much overhead does context-switching impose on a multi-session workflow."
The result is that the context accumulation problem is both ubiquitous and underdiscussed. Almost every solo developer I've talked to recognizes the feeling. Almost none of them have found a satisfying solution.
What Existing Tools Solve — And What They Don't
The past year gave us no shortage of AI tools. ChatGPT for writing, Copilot for code, Claude for analysis, Gemini for research, Perplexity for discovery. Per-seat AI assistants, project-specific AI tools, AI features built into existing platforms. The landscape is genuinely rich.
Individually, each tool is impressive. The best AI writing tools produce better first drafts than most human writers. The best AI coding tools understand your codebase in ways that generic ChatGPT never could. The best AI research tools synthesize information across thousands of sources.
But when you're actually working — not benchmarking, but doing real product work — the friction isn't "no AI tools available." It's the operational overhead of making AI work as part of a coherent process.
The Connection Problem
How do these tools connect to each other? The honest answer for most solo developers right now is: they don't. You use ChatGPT for writing and Copilot for code and a spreadsheet AI for data analysis and a research AI for competitive intelligence. Each tool lives in its own silo. Each tool starts from scratch.
If you want to take output from one AI tool and use it as input for another, you copy and paste. If you want context from one AI session to inform another, you manually carry it over. The tools are impressive individually; the system of tools is not integrated.
This is the connection problem: AI tools that don't connect to each other impose a manual integration cost that scales with the number of tools and the complexity of the workflow.
The Accumulation Problem
How do workflows accumulate rather than just producing one-off results?
A workflow that "accumulates" means: the work you do today builds on the work you did last week. The AI gets better at helping you because it has more context. Patterns emerge that you can exploit. The system gets more efficient over time.
A workflow that doesn't accumulate means: every session is essentially the same starting point. The AI doesn't know what you did last week unless you explicitly tell it. There are no patterns to exploit — you're starting fresh each time.
The accumulation problem is the difference between a tool that gets more valuable the longer you use it, and a tool whose value is capped at whatever it can do in a single session.
Most current AI tools are capped at single-session value. This is fine for one-off tasks. It's a real limitation for ongoing work.
The Re-Entry Problem
How do you build on work you did three weeks ago without re-explaining everything?
This one is insidious. If you used AI to help you design a pricing model three weeks ago, and you want to revisit it today, you have essentially two options: paste in a summary of the previous conversation (if you saved it), or re-explain the context from scratch.
Most people don't save the summaries. So they re-explain. Which takes time. Which reduces the probability that they'll bother revisiting and improving past work. Which means past AI-assisted work doesn't get built upon — it gets abandoned.
This is a specific case of the accumulation problem, but it deserves its own attention because the cost is invisible. You don't notice all the times you decided not to revisit something because the re-entry cost felt too high.
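One lightweight workaround for the re-entry problem is to persist a short summary after each AI session and prepend it the next time you revisit a topic. A hypothetical sketch of that habit (the file name, structure, and function names are illustrative, not any tool's actual API):

```python
# Hypothetical re-entry helper: store per-topic session summaries in a
# local JSON file and build a prompt that carries prior context forward.
import json
from pathlib import Path

STORE = Path("session_summaries.json")

def save_summary(topic: str, summary: str) -> None:
    """Append one session summary under a topic key."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data.setdefault(topic, []).append(summary)
    STORE.write_text(json.dumps(data, indent=2))

def reentry_prompt(topic: str, new_request: str) -> str:
    """Prefix a new request with the last few saved summaries."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    context = "\n".join(data.get(topic, [])[-3:])  # last three summaries
    return f"Previous context on {topic}:\n{context}\n\nNew request: {new_request}"
```

This is exactly the discipline most people don't sustain manually, which is why the work gets abandoned instead of revisited.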
The Tool-Switching Tax
There's a specific type of friction that solo developers feel more acutely, and that the AI industry has largely failed to address: tool switching.
In a team, tool switching is someone else's problem. A designer hands off to an engineer. An engineer hands off to a content person. Each person lives in their own tool ecosystem and optimizes for their own workflow. The handoff between people is someone else's coordination problem.
As a solo developer, you're doing all of this yourself. And every time you switch from your code editor to a writing tool to a data analysis notebook to a research interface, there's a cost.
This cost has several components:
The interface tax: you have to orient to a new interface. Find where you are. Find where the relevant functions are. Remember how this particular tool's version of AI works.
The context tax: you have to bring your context into the new tool. Copy and paste from one window to another. Or, if the tool supports file access, figure out which files are relevant and make sure they're accessible.
The AI personality tax: different AI tools have different default voices, different response patterns, different strengths and weaknesses. Switching tools means switching mental models for what the AI will do well.
The state tax: in your code editor, you know exactly where you are in the codebase. In your writing tool, you know exactly what document you're working on. When you switch to a research AI, you're in a new context with its own state, and you have to figure out how the research connects to the code or the writing.
Individually, each of these taxes is small. Aggregated across a day of frequent tool switching, they become significant.
The AI industry has largely addressed "AI produces bad output." It has not addressed "AI as part of a coherent working process for one person doing many roles." These are related but different problems.
What Floatboat Is Actually Building Toward
Floatboat's approach starts from a different premise: the AI working environment itself should remember how you work, not just execute your current instruction.
They call this a "workspace for AI-native work" — an environment where AI is not a separate tool you invoke, but a presence that persists across your work. The goal is to make AI's context window span not just your current conversation, but your entire working history.
They break this into three conceptual layers.
Layer 1: The Tacit Engine — Accumulated Context Without Explicit Configuration
The idea behind the Tacit Engine is that AI should accumulate a model of your preferences over time, without you explicitly training it or constantly confirming its learning.
Think of it like how a good executive assistant who's worked with you for six months starts to anticipate what you want. They don't need to ask you every time what your preferred way of handling a client call is — they've learned. They don't need a briefing before every meeting — they remember the context. They have a model of your priorities, your communication style, your decision-making patterns.
Floatboat is trying to build that kind of accumulated context for AI interactions.
The key phrase is "without explicitly configuring it." There are AI tools that ask you to set preferences, fill out onboarding questionnaires, configure your communication style. These can work, but they put the burden of articulation on you. And most people can't accurately articulate their own preferences in advance — they recognize good output when they see it, but they couldn't have specified it upfront.
The Tacit Engine's proposition is different: you use the tool, it watches how you work, it infers patterns from your behavior, and it applies those patterns in future sessions without you having to re-articulate anything.
In practice, this would mean:
The AI gradually learns that when you write for developers, you prefer short sentences and concrete code examples over conceptual explanations. When you're drafting a pricing page, you always want to see the value proposition before the feature list. When you're writing technical documentation, you start with the problem before the solution. When you're responding to a support ticket, you lead with empathy before the technical explanation.
These aren't things you'd fill out in a preferences form. They're things the AI picks up from watching you work over time.
The difference between this and just "giving better prompts" is significant. Better prompting requires you to explicitly establish context at the start of each session. The Tacit Engine's proposition is that the context should be built up across sessions, automatically, without your deliberate effort.
A real example: after three weeks of using the Tacit Engine for writing, the AI knows that when you write anything related to pricing, you always want competitor comparisons in a table format. You never have to tell it this explicitly. It watches you reformat competitor pricing into tables every time, and after enough observations, it starts doing it without being asked.
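The "after enough observations" mechanic can be pictured as a simple frequency threshold. This is an illustrative sketch of the general pattern, not Floatboat's actual Tacit Engine implementation; the class and method names are invented:

```python
# Illustrative threshold-based implicit preference learning: a behavior
# is applied only after being observed repeatedly in the same context,
# which is one crude way to separate genuine patterns from noise.
from collections import Counter

class PreferenceModel:
    def __init__(self, threshold: int = 3):
        self.observations = Counter()
        self.threshold = threshold  # observations needed before acting

    def observe(self, context: str, behavior: str) -> None:
        """Record one observed edit, e.g. ('pricing', 'table_format')."""
        self.observations[(context, behavior)] += 1

    def learned(self, context: str, behavior: str) -> bool:
        """Only apply a behavior once it has recurred enough times."""
        return self.observations[(context, behavior)] >= self.threshold
```

Even in this toy form, the hard questions are visible: keying on context ("pricing" vs "docs") is what prevents one formal email from rewriting your whole voice, and picking the threshold is where pattern-versus-noise judgment lives.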
Whether this actually works as described depends heavily on implementation details. Some things worth thinking about:
How does the system distinguish between genuine patterns and noise? If you write one unusually formal email, does it update your general communication style model, or does it recognize that as an outlier specific to that situation?
How does the system handle conflict between different types of work? When you're writing marketing copy and writing technical documentation, you probably want different voices. How does the Tacit Engine know which mode applies?
How transparent is the learning? If the AI is inferring things about your preferences, can you see what it's inferred? Can you correct it when it gets something wrong?
These are hard problems. The fact that they're hard doesn't mean Floatboat can't solve them — it means the implementation details matter more than the concept.
Layer 2: Combo Skills — Reusable Workflows That Compound
The second layer is what Floatboat calls Combo Skills — reusable chains of AI operations that go beyond single-prompt, single-response interactions.
The dominant model for AI interaction right now is: you provide input, AI produces output, the interaction ends. If you want to do something complex, you break it into multiple steps and run them sequentially, manually coordinating between steps.
Combo Skills proposes a different model: define a sequence of AI operations once, and then trigger that sequence with a single command. The results of each step automatically flow into the next step. You set it up once, you benefit from it repeatedly.
Let me give a concrete example of how this would work in practice.
The naive way to write a technical blog post with AI looks like this:
1. Open a research AI, paste in a topic, get an outline
2. Copy the outline
3. Open a writing AI, paste the outline and context, get a draft
4. Read the draft, identify weak points
5. Open the writing AI again, give it the feedback, get a revision
6. Repeat steps 4-5 until satisfied
7. Copy the final draft to your publishing tool
Each step is a separate interaction. Each step requires re-establishing context. Each step's output has to be manually moved to the next step. The context you built in step 1 (what the article is about, who it's for) has to be re-established in step 3. The writing style preferences you implied in step 3 have to be re-established if you switch to a different AI tool.
A Combo Skill for technical blog posts might work like this:
You define the skill once: research → outline → first draft → internal review (AI checks for logical consistency, clarity, technical accuracy) → revision → final polish (AI applies your writing style)
Now, whenever you want to write a technical blog post, you trigger this Combo Skill. The AI runs through each step automatically, with context flowing from one step to the next, and with the Tacit Engine applying your writing preferences at each step without you re-explaining them.
The output of the final step is a polished draft ready for review. Not a perfect draft, necessarily — you'd still review it. But a draft that has gone through a structured process with your context and preferences applied throughout.
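The core mechanic of a Combo Skill is that each step consumes the previous step's output, so context flows forward without copy-paste. A minimal sketch of that chaining pattern, where `call_ai` is a placeholder stand-in for a real model call, not an actual Floatboat API:

```python
# Minimal step-chaining sketch: each step's output becomes the next
# step's input, replacing manual copy-paste between AI sessions.

def call_ai(instruction: str, material: str) -> str:
    # Placeholder for an actual model call; tags the material so the
    # flow of context through the chain is visible.
    return f"[{instruction}] {material}"

def run_combo(topic: str, steps: list[str]) -> str:
    """Run each step on the previous step's output."""
    result = topic
    for step in steps:
        result = call_ai(step, result)
    return result

draft = run_combo("rate limiting",
                  ["research", "outline", "draft", "review", "polish"])
```

Everything interesting about a real system lives in what this sketch omits: branching on a step's output, retrying failed steps, and letting a step see more than just its immediate predecessor.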
The compounding effect is significant. Not compounding in the sense of "this saves me 20 minutes on this one post." Compounding in the sense of "this saves me 20 minutes every time I write a technical blog post, forever."
And the compounding doesn't stop there. Over time, the Tacit Engine learns which types of outlines work better for your audience, which revisions you typically make, which aspects of the writing you always want to revisit. The Combo Skill gets more aligned with your preferences with each use.
The harder question is: how flexible is the system for defining these workflows? Real workflows have edge cases. They have exceptions. They have situations where one step produces unexpected output and the next step needs to adapt.
A Combo Skill system that only works for idealized, textbook workflows is not useful for real solo developer work. A Combo Skill system that handles the full complexity of real workflows — with flexibility, error handling, and graceful degradation — would be genuinely valuable.
Some questions that matter: Can you mix and match operations freely, or are you constrained to pre-defined patterns? What happens when one step in a chain produces unexpected output — does the whole chain fail, or does it adapt? How do you debug a Combo Skill that's producing wrong results? How easy is it to modify a Combo Skill when your workflow changes?
The difference between those two outcomes is almost entirely in the execution quality. And it's too early to know where Floatboat lands on that spectrum.
Layer 3: The Unified Workspace — One Environment, Not Many Tools
The third layer is the workspace itself — files, browser, AI panels, all in one unified view with minimal window switching.
This part is the easiest to describe and the hardest to evaluate. On paper, it sounds obvious: of course it would be nice if your code editor, your writing tool, and your AI assistant lived in one window. Less context switching. Less overhead.
In practice, the answer depends entirely on execution quality.
The cognitive overhead of context switching is real. Anyone who's spent a day in a cluttered multi-monitor setup and then moved to a clean, focused single-monitor setup understands this: the cost is not just the time to physically switch windows, it's the mental overhead of re-orienting to each new context, an overhead that attention research has repeatedly measured.
But a unified workspace that is itself cluttered, slow, or poorly organized is worse than separate best-in-class tools that each do one thing well. If the unified workspace introduces more friction than it removes, you've made the problem worse rather than better.
This happens more often than the marketing suggests. Many "unified" tools suffer from the same problem: because they try to do everything, none of it is as good as the best-in-class alternative. The code editor is decent but not as good as VS Code. The writing tool is functional but not as good as iA Writer. The AI is capable but not as capable as Claude. The overall product is worse than the sum of its parts.
Floatboat's pitch is that they've designed their workspace specifically to reduce friction, not add to it. The specifics of what that means in practice — how they handle the tradeoff between comprehensiveness and excellence in each function, how they organize the interface, how they prevent feature bloat from creating new clutter — would need to be evaluated through actual use.
This is the layer where the "try it before you decide" advice applies most directly. Workspace quality is almost impossible to assess from documentation or descriptions. It has to be felt in daily use.
A Closer Look at What Each Layer Actually Solves
It helps to be explicit about what problem each layer solves, because together they address a coherent diagnosis of what's broken about current AI tooling for solo developers.
The Tacit Engine addresses the context re-establishment problem. It makes AI smarter over time by accumulating implicit context, rather than requiring you to explicitly re-establish context at the start of each session. It turns "the AI starts from scratch every time" into "the AI starts from where you left off last time."
Combo Skills address the workflow accumulation problem. They make AI more efficient over time by capturing recurring patterns of AI use, rather than requiring you to manually orchestrate each step of a complex workflow every time you do it. They turn "every workflow starts from scratch" into "this workflow picks up where the last instance left off."
The Unified Workspace addresses the tool-switching tax. It makes the environment itself less costly to operate in by reducing context-switching overhead, rather than expecting you to manually manage a multi-tool workflow. It turns "four windows to manage" into "one environment to live in."
Separately, each layer is addressing a real but incomplete problem. The context re-establishment problem is real, but solving it alone doesn't help if your workflows are still manual. The workflow accumulation problem is real, but solving it alone doesn't help if you're still paying the tool-switching tax. The tool-switching problem is real, but solving it alone doesn't help if each tool still starts from scratch.
Together, they constitute a coherent alternative to the current model of "use a collection of best-in-class AI tools and manage the overhead of integration yourself."
Whether that coherent alternative actually works — and works well enough to be worth switching to — depends on how well each layer is implemented. All three layers need to work. A great Tacit Engine doesn't help if the Unified Workspace is too cluttered to use. A powerful Combo Skill system doesn't matter if the context it operates on is always wrong because the Tacit Engine isn't learning correctly. A clean workspace doesn't help if the AI in it starts from scratch every session.
This is why "interesting concept" doesn't automatically mean "worth switching to." The whole has to be greater than the sum of its parts, and for that to be true, each part has to be genuinely good.
How This Compares to What's Already Out There
It's worth being direct about what Floatboat is competing with and how it positions itself relative to existing solutions.
The most honest comparison is with the general approach of "use a collection of best-in-class AI tools and manage the integration overhead yourself." For most solo developers today, this is the default. ChatGPT for writing, Copilot for code, Claude for analysis, Perplexity for research. Each tool is good at what it does. The integration overhead is absorbed as a cost of using the best tool for each job.
Floatboat's proposition is that this integration overhead can be systematized and reduced — that the tools should be designed to work together as a system, not just as individual components. The goal is not to beat the best individual tool at its specific task, but to provide a better overall system for solo developer workflows.
Here is how Floatboat compares to specific existing approaches:
vs. ChatGPT with disciplined prompting
A disciplined ChatGPT user can approach similar outputs to what Floatboat promises. With careful context management — maintaining a context document, structuring prompts carefully, keeping track of patterns across sessions — you can get ChatGPT to behave somewhat like a system that accumulates context.
The cost is entirely in manual work. You have to do the integration. You have to remember which context goes with which type of task. You have to maintain and update the context document as your product evolves. You have to catch when the AI is diverging from your preferences and correct it.
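Mechanically, disciplined prompting usually comes down to maintaining one context document and prepending it to every prompt. A sketch of that manual approach (the file name is illustrative; keeping its contents current is exactly the maintenance burden described above):

```python
# The manual "context document" discipline: one maintained file of
# brand/product context, prepended to every AI task by hand or script.
from pathlib import Path

def contextual_prompt(task: str,
                      context_file: str = "brand_context.md") -> str:
    """Build a prompt that carries the standing context document."""
    context = Path(context_file).read_text()
    return f"{context}\n\n---\n\nTask: {task}"
```

This works, and some people get excellent results from it. But every update to your product, tone, or audience means editing the file, and every new task type means deciding which context applies.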
Floatboat's proposition is that this manual integration work can be automated — that the system can learn context implicitly rather than requiring explicit management. The value proposition is the same output for less work, or better output for the same work.
For someone who's already doing the manual work and finding it effective, Floatboat offers a way to automate that work. For someone who isn't doing the manual work because the overhead is too high, Floatboat offers a better starting point.
The honest limitation: disciplined prompting with manual context management can sometimes outperform implicit learning, because the human is explicitly controlling what context applies when. The question is whether the efficiency gain from automation outweighs the precision loss from implicit learning.
vs. Cursor, Windsurf, and AI-first code editors
Cursor and Windsurf have made significant progress on the code-specific version of the context problem. If you work primarily in code, these tools have genuinely solved the "AI that remembers your codebase" problem in ways that generic ChatGPT cannot match.
The limitation is scope. These tools are optimized for code. They work less well for writing, analysis, research, or the mixed-mode work that many solo developers actually do.
A solo developer who spends 40% of their time in code, 30% in writing, 20% in analysis, and 10% in research will still have the non-code portions of their work operating with the context problem. And the cross-functional workflows — writing documentation that references code, analyzing data that comes from the codebase, researching decisions that affect the architecture — will still have integration overhead even if the code portion is solved.
Floatboat appears to be attempting a broader version of what Cursor has done for code — but for the full range of solo developer work. The question is whether breadth at this level of integration is achievable with current technology, or whether it requires so much customization for different types of work that the scope becomes a liability rather than an advantage.
vs. Notion AI, Craft, and document-centric AI tools
Notion AI and similar document-centric tools have solved context for document-based work reasonably well. If your work is primarily documents — notes, wikis, long-form writing — these tools handle context better than generic ChatGPT.
The limitation is that document-centric tools don't obviously solve the cross-tool workflow problem. They're good at documents but weak on code and weak on cross-tool workflows. A solo developer who works across code, documents, and other types of content still has the integration problem even if the document portion is handled well.
Floatboat's broader scope — if it works as described — would address the document tools' cross-tool weakness. Whether it actually does depends on how well the workspace integration is implemented and whether the code support is as good as the document support.
vs. Building your own system
Many solo developers have built some version of this themselves. Zapier or Make for workflow automation. API chains connecting different AI services. Custom scripts that stitch together different tools. A personal wiki that maintains context across sessions.
The advantage of building your own is control. You know exactly how every piece works. You can customize it precisely to your workflow. You can rip out any component and replace it with a better alternative without being locked in.
The disadvantage is switching costs and maintenance burden. A custom system you built yourself takes time to set up and time to maintain. When something breaks, you fix it. When an API changes, you update it. When you want to add a new capability, you build it yourself. And if you decide to switch to a different approach, there's significant exit cost.
Floatboat's proposition is that a purpose-built, maintained system is worth paying for — because the switching costs of building and maintaining your own are higher than the cost of a subscription, and because a dedicated team iterating on the product will outpace what you can build and maintain on your own.
Whether this is true depends on how much you value your own time, what your tolerance for maintenance overhead is, and whether Floatboat's execution quality justifies the lock-in.
The Open Questions Are Real — And Worth Sitting With
Before drawing conclusions about Floatboat, it's worth being clear about the open questions. These aren't reasons to dismiss the product. They're reasons to approach it with appropriate caution and to evaluate it against criteria that actually matter for your use case.
Does the Tacit Engine learn useful signals or noise?
The idea of implicit context learning is compelling. The execution risk is real.
If you write one unusually formal email, should that update your general communication style model, or should the system recognize it as contextually appropriate for that specific situation? If you've been prototyping for two weeks and your code has been messy, should the AI assume that's your preferred style, or should it recognize that as temporary? If you switch between project contexts — a client project and a personal project, or a technical project and a marketing project — how does the system know which context applies?
These aren't rhetorical questions. They're the specific points where implicit learning can go wrong. The difference between a system that learns from genuine patterns and one that learns from noise is almost entirely in implementation quality.
AI systems that learn from behavior are powerful when they learn correct patterns. They're misleading when they learn noise. And the feedback loop can be self-reinforcing: if the AI learns a wrong pattern, it produces outputs that reinforce that pattern, which gives the AI more data that confirms the pattern, which deepens the error.
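To make the signal-versus-noise risk concrete, here is a toy sketch in Python. Every name and number in it is hypothetical, and nothing here reflects how the Tacit Engine actually works; it only illustrates why a naive implicit learner chases outliers while an outlier-aware one discounts them.

```python
# Toy sketch of implicit preference learning (all names and numbers
# hypothetical; this is NOT Floatboat's actual algorithm).
# A naive learner updates a running "formality" profile from every
# document it sees; an outlier-aware variant shrinks its step for
# samples that sit far from the current estimate.

def naive_update(profile, sample, rate=0.3):
    """Exponential moving average: every sample moves the profile."""
    return profile + rate * (sample - profile)

def outlier_aware_update(profile, sample, rate=0.3, tolerance=0.4):
    """Discount samples far from the profile instead of chasing them."""
    distance = abs(sample - profile)
    if distance > tolerance:
        rate *= tolerance / distance  # shrink the step for outliers
    return profile + rate * (sample - profile)

# Ten casual emails (formality ~0.2), then one unusually formal one (0.95).
history = [0.2] * 10 + [0.95]

naive = aware = 0.2
for sample in history:
    naive = naive_update(naive, sample)
    aware = outlier_aware_update(aware, sample)

print(f"naive profile after outlier: {naive:.2f}")
print(f"aware profile after outlier: {aware:.2f}")
```

The naive profile jumps toward the single formal email, while the outlier-aware one moves less. Neither variant is "correct"; the point is that the choice between them, which a user never sees, is exactly where implicit learning succeeds or fails.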
The honest advice: pay attention to how the Tacit Engine behaves after a month of use, not after a week. Short-term behavior is easy to evaluate. Whether the learning compounds positively or negatively over time is what matters.
Are Combo Skills flexible enough for real workflows?
The Combo Skill concept is easy to understand in the abstract. It's much harder to evaluate without using it.
Real workflows have exceptions. They have edge cases. They have situations where one step in a chain produces unexpected output and the next step needs to adapt. They have situations where the right workflow depends on information that only emerges during the workflow itself.
A Combo Skill system that works for textbook examples is not useful for real solo developer work. A Combo Skill system that handles the full complexity of real workflows — with flexibility, error handling, observability, and graceful degradation when the workflow produces unexpected results — would be genuinely valuable.
The difference between those two outcomes is large, and it's almost entirely in the quality of the implementation. This is something you'd need to evaluate through sustained use.
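The graceful-degradation property described above can be sketched in a few lines. Everything here, the step names and the workflow runner, is a hypothetical illustration, not Floatboat's API: the point is that one failing step is recorded and the chain continues instead of silently aborting.

```python
# Toy sketch of a step-chained workflow with graceful degradation.
# All names are hypothetical; this is not Floatboat's implementation.

def draft_changelog(ctx):
    ctx["changelog"] = f"v{ctx['version']}: {ctx['summary']}"
    return ctx

def post_to_twitter(ctx):
    raise ConnectionError("API unreachable")  # simulate a failing step

def email_users(ctx):
    ctx["email"] = f"Update shipped: {ctx['changelog']}"
    return ctx

def run_workflow(steps, ctx):
    """Run each step; on failure, record the error and continue, so
    one broken step doesn't kill the whole chain unobserved."""
    report = []
    for step in steps:
        try:
            ctx = step(ctx)
            report.append((step.__name__, "ok"))
        except Exception as exc:
            report.append((step.__name__, f"failed: {exc}"))
    return ctx, report

ctx, report = run_workflow(
    [draft_changelog, post_to_twitter, email_users],
    {"version": "1.2", "summary": "faster builds"},
)
for name, status in report:
    print(f"{name}: {status}")
```

A textbook demo only needs the happy path through `run_workflow`; real workflows live in the `except` branch, in the report, and in what the next step does with partial context.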
Is the unified workspace actually lower friction?
On paper, a unified workspace sounds like less friction. In practice, it depends entirely on execution quality.
The specific things to evaluate: Is the interface clean, or cluttered? Does the system perform well when you're working across large codebases? Is the file management intuitive? Does the unified view actually reduce cognitive load, or does it just move the complexity somewhere else? Is the code editor as good as the dedicated code editors you're currently using?
These questions aren't rhetorical either. They're the ones you'd need to answer through sustained use before deciding whether the workspace delivers on its promise.
What does migration look like?
If Floatboat works for you, great. The question worth asking before you commit is: what does it look like if you decide to leave?
Combo Skills you've designed, context accumulated over months, workflows customized to your patterns, files organized in their system: all of it carries exit cost, even if Floatboat has no explicit lock-in mechanisms.
This matters less if the tool is clearly better than alternatives and you're committed to the approach. It matters more if you're evaluating Floatboat as one option among several, and you're not yet sure whether AI-accumulated-context is the right approach for your work.
Who is this actually for?
This is the most important question, and it's underdiscussed. The fit criteria, who should skip Floatboat and who should give it a serious look, are spelled out in the verdict below. One concern, though, applies regardless of fit.
The concern is whether Floatboat is truly the answer to this problem, or just the most visible current attempt. Other companies are likely developing similar capabilities, so the competitive landscape could shift significantly. For now, though, Floatboat deserves serious consideration as an early mover in this space.
Honest Verdict: Is It Worth the Switch?
Here is the honest framework for deciding.
Floatboat is probably not for you if: you are primarily a coder and have already found Cursor or Windsurf and that is your main pain point; you work in a team where context is managed through processes and documents rather than individual accumulation; you prefer using best-in-class individual tools even with the integration overhead; or the context problem is not actually your bottleneck.
Floatboat might be worth a serious look if: you do mixed-mode work across code, writing, analysis, and strategy; you have tried the tool-switching approach and found the overhead significant enough to matter; you have felt the specific pain of re-explaining context across sessions and it is a daily frustration rather than an occasional nuisance; or the idea of accumulated AI context sounds transformative rather than incremental.
The honest summary: Floatboat is addressing a real problem, with a coherent and ambitious approach. Whether it executes well enough to justify the switching costs is a different question. That question can only be answered through use over time.
The problem is worth watching. Floatboat is worth evaluating with appropriate skepticism. And the commitment to trying it should be made cautiously, with clear criteria for what working looks like for your specific use case.
What to Evaluate Before You Commit
If you have decided to take Floatboat seriously, here is what to pay attention to in your evaluation.
First: the Tacit Engine after one month, not after one week. Early behavior is easy to evaluate. Whether the learning compounds in a useful direction over time is what matters. Set a calendar reminder to reassess after 30 days of real use, not after the first session.
Second: one Combo Skill that you use daily. Pick the workflow you do most often — not a toy example, not a theoretical use case, but the actual thing you do every week. Build it, use it, see if it produces better results than doing it manually. If it does not work for your real workflow, it will not work in general.
Third: the workspace under real load. Not just opening it and feeling pleased with the clean interface. Actually work in it for a full day. See if the code editor handles your codebase the way you expect. See if the writing tool supports your actual writing workflow. See if the file management makes sense for how you organize your work.
Fourth: what happens when you try to leave. Before you commit significant workflow energy to Floatboat, simulate the exit. Export your Combo Skills. See how portable they are. Check whether the context the Tacit Engine has built is accessible or proprietary. This is not decisive — some lock-in is inevitable with any tool — but knowing the exit cost before you need it is always better than discovering it after.
These four things — long-term learning quality, real workflow performance, workspace behavior under load, and exit cost transparency — are what separate a tool you should bet your workflow on from a tool you should watch with interest.