Standing on the shoulders of giants, CRIA is a Large Language Model (LLM) instruction-tuned from Meta's Llama 2 (7B, Chat) model, a top-tier open-source LLM as of writing. This base model gives CRIA its engaging interactions and impressive contextual understanding.
CRIA is further fine-tuned on a free Colab instance using the CodeLlama-2-20k instruction dataset, enhancing its coding capabilities. Additionally, the inference model is quantized, optimizing it for swift deployment and rapid inference. This means you get efficiency without compromising on quality.
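To illustrate the kind of setup described above, here is a minimal sketch of 4-bit QLoRA fine-tuning as it is commonly done on a free Colab GPU with the Hugging Face `transformers` and `peft` libraries. The model ID, target modules, and hyperparameters are illustrative assumptions, not the exact configuration used to train CRIA; see the GitHub repository for the actual training code.

```python
# Illustrative QLoRA configuration sketch — values are assumptions,
# not CRIA's exact training setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-chat-hf"  # assumed Hugging Face model id

# Load the frozen base model in 4-bit NF4 precision so that the 7B model
# fits within the ~15 GB VRAM of a free Colab T4 GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

# Train only small low-rank adapter matrices (LoRA) on top of the
# frozen quantized weights; only these adapters receive gradients.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Because only the adapter weights are trained, the memory and compute footprint stays small enough for free, time-limited instances, which is what makes this proof-of-concept feasible.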
In essence, CRIA is a proof-of-concept, showcasing the potential of instruction tuning even with free, limited resources. Beyond that, it remains well-versed in a broad spectrum of topics, making it an interesting conversation partner. Try CRIA now!
A cria (pronounced krē-ə) is a baby llama; the name is an homage to CRIA's base model, Meta Llama 2.
Hosting an LLM on cloud servers comes at a considerable cost, especially for a project pursued out of interest. Consequently, inference is often turned off to manage costs.
If you're interested in seeing a live demonstration, please get in touch with me. And if you know of any near-zero-cost hosting methods, or even cost-effective alternatives, don't hesitate to reach out. Your input and ideas are invaluable to me, and I'm keen to engage in discussions with fellow enthusiasts like you.
The goal of CRIA v1.3 is to showcase the potential of fine-tuning your own LLM with freely available resources. The initial focus is on establishing a Minimum Viable Product (MVP), with performance considered a secondary concern for this release.
Hence, it is crucial to recognize that this process is highly experimental. Please exercise caution and avoid interpreting CRIA's responses as authoritative advice.
While CRIA is versatile and constantly improving, here are a few examples of how users might interact with CRIA:
Casual Conversations: Engage in friendly and informative conversations on a wide range of topics.
Fun and Entertainment: Have some fun with CRIA's cheerful persona by asking it jokes, riddles, or creative questions.
Coding Assistance: Seek simple coding help and work on small coding tasks.
The creation of CRIA has been a fascinating journey, and you can find comprehensive technical details about its development in our GitHub repository, including in-depth insights into CRIA's architecture, codebase, and the design decisions behind this chatbot.
Absolutely! The journey with CRIA is an ongoing endeavor. While there's no set timeline for future feature releases, we're committed to improving this project.
The goal of CRIA is to deploy a functional MVP: a conversational chatbot with a cheerful persona. We noted that the current instruction dataset does not match this purpose, so our roadmap includes a comprehensive assessment of the current instruction-tuning process, as well as the introduction of a custom dataset to fine-tune the base model for our specific use case. A thorough model evaluation is also part of the roadmap, to deepen our understanding of CRIA's capabilities.
There are several ways you can contribute to the development and support of CRIA:
Provide Feedback: Your feedback is incredibly valuable in helping us improve CRIA. If you have suggestions, ideas, or encounter any issues, please don't hesitate to email us or open an issue on our GitHub repository.
Contribute Code: If you're a developer, you can actively contribute to CRIA's development by submitting pull requests on our GitHub repository. We welcome contributions in the form of bug fixes, new features, and improvements to existing functionality.
Support Hosting Costs: Running a chatbot like CRIA involves cloud hosting expenses. You can help support the project's sustainability by contributing to our Ko-fi fund. Your support will directly contribute to covering the backend hosting costs and ensuring the availability of CRIA for users.