BayJarvis News

9th March 2024 #

BitNet Transformer: Scaling 1-bit Transformers for Large Language Models

BitNet Transformer is an architecture that scales 1-bit Transformers for large language models. It achieves competitive performance while substantially reducing memory footprint and energy consumption compared to state-of-the-art 8-bit quantization methods and FP16 Transformer baselines.

Key Features:

  • BitLinear: A drop-in replacement for the nn.Linear layer in PyTorch, enabling the training of 1-bit weights from scratch (see the sketch after this list).
  • Scalable and Stable: BitNet Transformer is designed to be scalable and stable, capable of handling large language models efficiently.
  • Competitive Performance: Achieves competitive results in terms of perplexity and downstream task accuracy compared to baselines.
  • Significant Energy Savings: Provides substantial energy cost reductions, especially as the model size scales up.
  • Scaling Law: Exhibits a scaling law akin to full-precision Transformers, suggesting its potential for effective scaling to even larger language models.
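
A minimal sketch of the BitLinear idea (an assumption-laden illustration, not the reference implementation): weights are binarized to ±1 with a scaling factor in the forward pass, while full-precision weights are kept for the optimizer update via a straight-through estimator. Normalization and activation quantization details from the paper are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinearSketch(nn.Module):
    """Illustrative 1-bit linear layer: binarized weights in the forward pass,
    full-precision weights kept for gradient updates (straight-through estimator)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x):
        w = self.weight
        alpha = w.abs().mean()                 # per-tensor scaling factor
        w_bin = torch.sign(w) * alpha          # binarize to +/- alpha
        # Straight-through estimator: forward uses w_bin, backward flows to w.
        w_ste = (w_bin - w).detach() + w
        return F.linear(x, w_ste)

# Drop-in usage in place of nn.Linear:
layer = BitLinearSketch(512, 512)
out = layer(torch.randn(4, 512))
```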

Availability:

  • GitHub: The code and implementation details are available on GitHub.
  • Blog Post: For a detailed overview and analysis of BitNet Transformer, please refer to our blog post.

5th December 2023 #

Launch of EcoAssistant: Advancing AutoGen for Superior Code-driven Q&A

EcoAssistant utilizes AutoGen for enhanced code-driven question answering, leveraging advanced AI techniques for iterative code refinement and an assistant hierarchy to manage varying levels of query complexity.

Project Highlights:

  • Iterative Code Refinement: Employs sophisticated algorithms to refine responses for increased accuracy.
  • Assistant Hierarchy: Structured system to handle queries at different complexity levels, ensuring precise and relevant answers (see the sketch after this list).
  • Use of Past Queries: Incorporates successful past queries to improve response generation and efficiency.
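
The assistant-hierarchy idea can be pictured with a minimal sketch (generic code, not EcoAssistant's actual implementation; query_model, answer_is_acceptable, and the model names are hypothetical placeholders): cheaper assistants are tried first, and the query escalates only when the answer fails a check.

```python
# Hypothetical helpers, for illustration only.
def query_model(model_name: str, question: str) -> str:
    """Placeholder: send the question to the named model and return its answer."""
    raise NotImplementedError

def answer_is_acceptable(answer: str) -> bool:
    """Placeholder: verify the answer, e.g. by executing the generated code."""
    raise NotImplementedError

# Assistant hierarchy: try the cheapest model first, escalate on failure.
MODEL_HIERARCHY = ["cheap-model", "mid-tier-model", "frontier-model"]

def answer_with_hierarchy(question: str):
    for model in MODEL_HIERARCHY:
        answer = query_model(model, question)
        if answer_is_acceptable(answer):
            return answer
    return None
```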

Availability: The documentation and code are available on GitHub. For further details, refer to the project's blog post: Implementing EcoAssistant: Leveraging AutoGen for Enhanced Code-driven Question Answering.

1st December 2023 #

Release of Zephyr's Mistral DPO Training Framework

Zephyr's Mistral DPO training framework, based on distilled direct preference optimization (dDPO) for language model alignment, has been released. It introduces an efficient method to fine-tune language models using Direct Preference Optimization, focusing on alignment with human values. The framework features robust configuration options, specialized dataset handling, and a tailored training process, all designed to enhance model responsiveness and relevance. Mistral DPO stands out as a pivotal advancement in AI, aiming for models that not only understand language but also grasp human intentions.
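
For context, the standard DPO objective that dDPO distills can be written in a few lines of PyTorch; this is a generic sketch of the loss, not the framework's exact training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss: prefer the chosen response over the rejected one,
    measured relative to a frozen reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example with dummy per-sequence log-probabilities:
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
```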

Details on GitHub: Zephyr dDPO Training and Blog: Harnessing Zephyr's Breeze: DPO Training on Mistral-7B-GPTQ for Language Model Alignment.

28th November 2023 #

Release of Zephyr 7B GPTQ Model Fine-tuning Framework with 4-Bit Quantization

Zephyr's new framework enhances GPTQ model performance through fine-tuning and 4-bit quantization, tailored for efficient chatbot interactions.

Availability: The framework is open for exploration and contribution on the BayJarvis llm GitHub repo and the BayJarvis Blog, offering a new avenue for enhancing chatbot solutions.

Framework Highlights:

  • Fine-tuning Workflow: Utilizes zephyr_trainer.py for data preparation and model training, incorporating LoRA modules and quantization for optimized performance (a rough configuration sketch follows this list).

  • Efficiency & Adaptability: Implements gradient checkpointing and precise training arguments, ensuring responsive and effective model behavior.

  • Inference Capability: Demonstrated by finetuned_inference.py, the model delivers real-time, context-aware responses, ideal for support scenarios.
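
As a rough illustration of this kind of setup (the model name, target modules, and hyperparameters below are assumptions, not the repo's exact configuration), a LoRA adapter can be attached to a quantized causal LM with the Hugging Face peft library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "TheBloke/zephyr-7B-beta-GPTQ"  # illustrative 4-bit GPTQ checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Enable gradient checkpointing and prepare the quantized model for training.
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```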

18th November 2023 #

nanoDPO v0.1 has been released: a pioneering implementation of Direct Preference Optimization (DPO) for time series data, inspired by "Direct Preference Optimization: Your Language Model is Secretly a Reward Model," the paper behind the cutting-edge DPO approach to language model fine-tuning.

Key Features:

  • Causal Transformer and LSTM Integration: Incorporating Causal Transformer and LSTM models to handle time series data effectively (see the causal-attention sketch after this list).
  • DPO Algorithm Implementation: Direct Preference Optimization for nuanced understanding and prediction of time series trends.
  • DPO and Multi-Class Trainers: Two distinct training models catering to different time series analysis requirements.
  • Customizable Training Configurations: Enhanced flexibility with adjustable learning rates, batch sizes, and model specifications.
  • Performance Metrics: Robust metrics including accuracy and loss visualizations.
  • Tooling Compatibility: Compatible with popular machine learning tools like PyTorch and wandb.
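
As a generic illustration of the causal-attention idea for time series (not nanoDPO's actual model code), a transformer encoder can be restricted so each timestep attends only to the past:

```python
import torch
import torch.nn as nn

batch, seq_len, n_features, d_model = 8, 32, 4, 64

input_proj = nn.Linear(n_features, d_model)   # project raw features into the model dimension
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

x = torch.randn(batch, seq_len, n_features)    # (batch, time, features)
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

h = encoder(input_proj(x), mask=causal_mask)   # each step sees only earlier steps
print(h.shape)                                 # torch.Size([8, 32, 64])
```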

Documentation:

For more information, visit the GitHub README and the detailed Documentation.

6th November 2023 #

nanoPPO v0.15 has been released, bringing significant enhancements to the Proximal Policy Optimization (PPO) algorithm tailored for reinforcement learning tasks.

What's New in v0.15?

  • Actor/Critic Causal Attention Policy: A new policy framework to enhance decision-making processes.
  • Custom Learning Rate Scheduler: Introducing a version number and a custom scheduler for fine-tuning the learning rate during agent training.
  • Gradient and Weight Inf/Nan Checks: Added safeguards against infinite and NaN values in gradients and weights to improve stability.
  • Enhanced Training Mechanism: The training script now utilizes average rewards and includes a new cosine learning rate scheduler for iterative adjustment.

Additional Improvements:

  • Debug flag for NAN detection in model parameters.
  • Use of torch.nn.utils.clip_grad_norm_ for gradient clipping (a generic sketch of these safeguards follows below).
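
A minimal sketch of what these safeguards typically look like in a PyTorch training step (generic code under stated assumptions, not nanoPPO's exact implementation):

```python
import torch

def safe_optimizer_step(model, loss, optimizer, scheduler, max_grad_norm=0.5, debug=False):
    """Backward pass with inf/NaN checks, gradient clipping, and LR scheduling."""
    optimizer.zero_grad()
    loss.backward()

    if debug:  # flag for detecting non-finite values in gradients
        for name, p in model.named_parameters():
            if p.grad is not None and not torch.isfinite(p.grad).all():
                print(f"non-finite gradient in {name}")

    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    scheduler.step()

# Example wiring with a cosine learning-rate schedule:
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)
```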

Documentation:

For a full overview of the new features and improvements, please refer to the GitHub README and the detailed Changelog.

7th October 2023 #

The nChain 0.12 release unveils a Python package specifically crafted for creating LLM bots over a flexible and extensible dataset.

Features & Enhancements:

  1. Sentence Transformers Embedding: By harnessing the capabilities of sentence_transformers, nChain delivers superior text embeddings. This integration ensures that your textual data is transformed into accurate and high-quality vector representations.
  2. Annoy Index for Embedding Search: With nChain, search operations are a breeze, thanks to the integration of the Annoy index. This feature promises swift and precise searches, streamlining the embedding retrieval process (a short usage sketch follows this list).
  3. ArXiv Paper Search Example: To offer a glimpse into the practical potential of nChain, we have incorporated an example that demonstrates its prowess in searching through arXiv papers. This hands-on experience reveals the precision and efficiency that is the hallmark of nChain.
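
The embedding-plus-Annoy pipeline can be sketched with a few lines of generic usage of the two libraries (the embedding model and corpus below are illustrative, not nChain's internal code):

```python
from sentence_transformers import SentenceTransformer
from annoy import AnnoyIndex

corpus = [
    "Attention is all you need.",
    "Direct preference optimization for alignment.",
    "Proximal policy optimization for control.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model
embeddings = model.encode(corpus)

# Build an Annoy index over the embeddings for fast approximate search.
dim = int(embeddings.shape[1])
index = AnnoyIndex(dim, "angular")
for i, vec in enumerate(embeddings):
    index.add_item(i, vec)
index.build(10)   # 10 trees

# Query: embed the question and retrieve the nearest corpus entries.
query_vec = model.encode(["How does preference-based fine-tuning work?"])[0]
for i in index.get_nns_by_vector(query_vec, 2):
    print(corpus[i])
```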

For an in-depth exploration of this release, we recommend visiting the GitHub README and the GitHub release notes.

19th September 2023 #

nanoPPO 0.13, an implementation of the Proximal Policy Optimization (PPO) algorithm for reinforcement learning, is now available. Initially supporting discrete action spaces in v0.1, the latest v0.13 has expanded its support to continuous action spaces, catering to a broader spectrum of applications. To aid users in comprehending the training process, the release includes examples that demonstrate how agents can be trained across different environments. Besides MountainCarContinuous, two customized environments, PointMass1D and PointMass2D, have been introduced; these are specifically designed to facilitate convenient testing of PPO agent training. An initial test suite is included to maintain code quality and ensure consistent functionality. For a comprehensive overview, please refer to the GitHub README and the GitHub release notes.
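
As a rough picture of what a minimal continuous-action test environment in the spirit of PointMass1D might look like (a generic Gymnasium-style sketch, not the package's actual code):

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class PointMass1DSketch(gym.Env):
    """Toy 1-D environment: push a point mass toward the origin."""

    def __init__(self):
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-10.0, high=10.0, shape=(1,), dtype=np.float32)
        self.state = np.zeros(1, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(-5.0, 5.0, size=1).astype(np.float32)
        return self.state, {}

    def step(self, action):
        self.state = self.state + np.asarray(action, dtype=np.float32)
        reward = -float(abs(self.state[0]))           # closer to the origin is better
        terminated = bool(abs(self.state[0]) < 0.05)  # reached the goal region
        return self.state, reward, terminated, False, {}
```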