<!DOCTYPE html>
<html>
<head>
<link rel="icon" href="../../../assets/icon.ico" type="image/x-icon">
<title>gorilla</title>
<style>
body {
  background-color: #f9f9f9;
  font-family: 'Microsoft YaHei', sans-serif;
  padding: 20px;
}

h2 {
  color: #333;
  text-align: center;
}

h3 {
  color: #666;
}

p {
  color: #333;
  line-height: 1.5;
}

.chat-container {
  max-width: 800px;
  margin: auto;
  padding: 20px;
  border-radius: 10px;
  background-color: white;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}

img.human {
  width: 40px;
  height: 40px;
}

.chat-group {
  border: 1px solid #ddd;
  border-radius: 10px;
  padding: 20px;
  margin-bottom: 20px;
}

.chat-entry {
  display: flex;
  margin-bottom: 20px;
}

.chat-entry .image {
  width: 50px;
  height: 50px;
}

.chat-entry .message {
  margin-left: 10px;
  padding: 10px;
  border-radius: 10px;
}

.chat-entry.human .message {
  background-color: lightblue;
}

.chat-entry.assistant {
  flex-direction: row-reverse;
}

.chat-entry.assistant .message {
  background-color: lavender;
  max-width: 90%;
}
</style>
<link href="../../../assets/prism.css" rel="stylesheet" />
</head>
<body>
<div class="chat-container">
  <h2>Demos of Instruction-Following LLaMA2.<br>Using the Gorilla TensorFlow Hub dataset.</h2>
  <h3>Instruction 1:</h3>
  <div class="chat-group">
    <div class="chat-entry human">
      <img class="human" src="../../../assets/human.png">
      <div class="message">I am working on a project where I need to cluster similar images of street art. How can I get the relevant feature vectors from these images for clustering?</div>
    </div>
    <div class="chat-entry assistant">
      <img class="image" src="../../../assets/gorilla.png">
      <div class="message">
        <strong><<<domain>>>: </strong>Image feature vector, <br><br>
        <strong><<<api_call>>>: </strong>hub.KerasLayer('https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4'), <br><br>
        <strong><<<api_provider>>>: </strong>TensorFlow Hub, <br><br>
        <strong><<<explanation>>>: </strong>This API call loads a pre-trained MobileNetV2 model from TensorFlow Hub, trained on the ImageNet dataset. The model extracts feature vectors from images: feed your street art images into it to obtain feature vectors, which can then be used for clustering.
        <br><br>
        <strong><<<code>>>: </strong>
        <pre><code class="language-python">import tensorflow as tf
import tensorflow_hub as hub

# Build a feature-extraction model from the pre-trained MobileNetV2 module
model = tf.keras.Sequential([
    hub.KerasLayer('https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4',
                   input_shape=(224, 224, 3),
                   trainable=False)
])

# Load and preprocess one image, then extract its feature vector
image = tf.keras.preprocessing.image.load_img('path/to/image.jpg', target_size=(224, 224))
image_array = tf.keras.preprocessing.image.img_to_array(image)
image_array = tf.expand_dims(image_array, 0)

features = model.predict(image_array)
</code></pre>
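        <br>Once a feature vector has been extracted for every image and the vectors are stacked into a matrix, clustering can be applied on top. A minimal sketch using scikit-learn's KMeans; the random <code>features_matrix</code> and the cluster count of 3 are stand-ins for your real data:
        <pre><code class="language-python">import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the stacked MobileNetV2 feature vectors (one 1280-dim row per image)
rng = np.random.default_rng(0)
features_matrix = rng.normal(size=(10, 1280))

# Group the images into 3 clusters; labels holds one cluster id per image
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(features_matrix)
</code></pre>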
      </div>
      <!-- You can continue the conversation here -->
    </div>
  </div>
  <h3>Instruction 2:</h3>
  <div class="chat-group">
    <div class="chat-entry human">
      <img class="human" src="../../../assets/human.png">
      <div class="message">My dog is always getting into things that can be dangerous. Can you help me identify the object my dog is about to eat?</div>
    </div>
    <div class="chat-entry assistant">
      <img class="image" src="../../../assets/gorilla.png">
      <div class="message">
        <strong><<<domain>>>: </strong>Image object detection, <br><br>
        <strong><<<api_call>>>: </strong>detector = hub.load('https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2'), <br><br>
        <strong><<<api_provider>>>: </strong>TensorFlow Hub, <br><br>
        <strong><<<explanation>>>: </strong>This API call loads a pre-trained TensorFlow Hub object-detection model based on the SSD MobileNet V2 architecture. Once loaded, the model can detect objects in images, allowing you to identify what your dog is about to eat. It is trained on the COCO dataset, which covers a variety of common objects.
        <br><br>
        <strong><<<code>>>: </strong>
        <pre><code class="language-python">import tensorflow_hub as hub
detector = hub.load('https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2')</code></pre>
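        <br>The loaded detector takes a batch of images as a uint8 tensor and returns a dictionary of tensors, including detection_boxes, detection_classes, and detection_scores. A minimal sketch of keeping only the confident detections, shown here with mock outputs in place of a real detector call:
        <pre><code class="language-python">import numpy as np

# Mock of the detector's output dictionary (real values come from detector(image_tensor))
result = {
    'detection_scores': np.array([[0.92, 0.71, 0.12]]),
    'detection_classes': np.array([[18.0, 44.0, 1.0]]),  # COCO class ids (18 = dog, 44 = bottle)
    'detection_boxes': np.array([[[0.1, 0.2, 0.5, 0.6],
                                  [0.3, 0.3, 0.7, 0.8],
                                  [0.0, 0.0, 0.1, 0.1]]]),
}

# Keep only detections above a confidence threshold
threshold = 0.5
keep = result['detection_scores'][0] >= threshold
classes = result['detection_classes'][0][keep].astype(int)
boxes = result['detection_boxes'][0][keep]
</code></pre>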
      </div>
    </div>
  </div>
  <h3>Instruction 3:</h3>
  <div class="chat-group">
    <div class="chat-entry human">
      <img class="human" src="../../../assets/human.png">
      <div class="message">For arranging my documents, I want to compute the semantic similarity between a set of sentences.</div>
    </div>
    <div class="chat-entry assistant">
      <img class="image" src="../../../assets/gorilla.png">
      <div class="message">
        <strong><<<domain>>>: </strong>Text embedding, <br><br>
        <strong><<<api_call>>>: </strong>hub.load('https://tfhub.dev/google/universal-sentence-encoder/4'), <br><br>
        <strong><<<api_provider>>>: </strong>TensorFlow Hub, <br><br>
        <strong><<<explanation>>>: </strong>The API call loads the Universal Sentence Encoder model from TensorFlow Hub. The model encodes text into high-dimensional vectors that can be used for natural language tasks such as text classification, semantic similarity, and clustering. With it, you can compute the semantic similarity between a set of sentences and arrange your documents by content.
        <br><br>
        <strong><<<code>>>: </strong>
        <pre><code class="language-python">import tensorflow_hub as hub
embed = hub.load('https://tfhub.dev/google/universal-sentence-encoder/4')</code></pre>
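        <br>Calling embed(sentences) yields one vector per sentence; pairwise cosine similarity over those vectors gives the semantic-similarity scores. A minimal sketch, using random stand-in embeddings in place of the model's output:
        <pre><code class="language-python">import numpy as np

# Stand-in for embed(sentences).numpy(): one 512-dim row per sentence
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 512))

# Pairwise cosine similarity: normalize the rows, then take dot products
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
unit = embeddings / norms
similarity = unit @ unit.T  # similarity[i, j] in [-1, 1]; diagonal is 1
</code></pre>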
      </div>
    </div>
  </div>

  <!-- You can continue the conversation here -->

  <!-- Conversations go here -->
</div>
<script src="../../../assets/prism.js"></script>
</body>
</html>