Anyone paying attention to the current state of AI is aware of the exploding need for compute resources. Many of today's engineers have discovered the wonderful land of not managing their own infrastructure: a growing number of companies now let you run models and other compute-intensive workloads on their managed clouds by offering "serverless" GPUs and other high-performance compute options.
Applications that enable a natural-language interface between people and data will be at the forefront of enterprise LLM adoption. At Scale, we launched a chatbot to increase access to information about our firm, but we also built it ourselves to gain an understanding of the solution space. We've just published a blog post exploring the architecture and sharing conclusions from the experience. We're also making the full source code available and free to use.
It is nearly impossible to define alignment, let alone achieve it. Human beings, in all their variance, capacity, and folly, make up the entire training pipeline, and we take such comfort in the fact that we call them all by the same name. But every human is individual, and models will change as their trainers do.