[Full-stack]
AirBnB Clone
A full-stack Airbnb clone built from scratch at ALX/Holberton — dual storage backends (file JSON and MySQL via SQLAlchemy), a REST API with Swagger docs, a Flask/Jinja2 web frontend, a jQuery dynamic UI, and Fabric deployment automation.
A full Airbnb clone built from scratch in phases at ALX/Holberton. The project started as a command-line CRUD interface backed by a JSON file and grew into a complete web application: a REST API with Swagger documentation, a Flask frontend with Jinja2 templates, a jQuery dynamic interface, and automated deployment to two remote web servers using Fabric. Two interchangeable storage backends — file storage and MySQL via SQLAlchemy — can be swapped at runtime via a single environment variable.
Project Structure
.
├── models/
│   ├── base_model.py             # UUID, timestamps, to_dict, save
│   ├── user.py                   # User — MD5 password hashing via __setattr__
│   ├── state.py                  # State — relationship (db) or property (file)
│   ├── city.py                   # City — places relationship
│   ├── place.py                  # Place — many-to-many via place_amenity table
│   ├── review.py
│   ├── amenity.py
│   └── engine/
│       ├── file_storage.py       # JSON serialization, get/count
│       └── db_storage.py         # SQLAlchemy scoped_session, get/count
├── api/
│   └── v1/
│       ├── app.py                # Flask app, CORS, Swagger, teardown_appcontext
│       └── views/
│           ├── index.py          # /status, /stats
│           ├── states.py
│           ├── cities.py
│           ├── places.py         # CRUD + POST /places_search
│           ├── places_reviews.py
│           ├── places_amenities.py   # storage-aware many-to-many
│           ├── amenities.py
│           └── users.py
├── web_flask/                    # Server-rendered Flask frontend (Jinja2)
├── web_dynamic/                  # jQuery dynamic frontend + REST API
├── web_static/                   # Pure HTML/CSS mockups
├── 1-pack_web_static.py          # Fabric: create timestamped .tgz
├── 2-do_deploy_web_static.py     # Fabric: upload + symlink on web servers
└── 3-deploy_web_static.py        # Fabric: pack + deploy in one command
The Dual Storage Backend
The most important architectural decision in the project is that storage is
completely swappable. models/__init__.py reads HBNB_TYPE_STORAGE at import
time and instantiates either FileStorage or DBStorage. The rest of the
codebase uses only the storage object — never touching the filesystem or
database directly:
# models/__init__.py
from os import getenv

storage_t = getenv("HBNB_TYPE_STORAGE")
if storage_t == "db":
    from models.engine.db_storage import DBStorage
    storage = DBStorage()
else:
    from models.engine.file_storage import FileStorage
    storage = FileStorage()
storage.reload()
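The switch can be exercised in isolation. A minimal stand-in sketch — `FileStorage`, `DBStorage`, and `make_storage` here are placeholders, not the real engines:

```python
import os

class FileStorage:  # placeholder for models.engine.file_storage.FileStorage
    pass

class DBStorage:    # placeholder for models.engine.db_storage.DBStorage
    pass

def make_storage():
    # mirrors models/__init__.py: the env var read decides the backend
    if os.getenv("HBNB_TYPE_STORAGE") == "db":
        return DBStorage()
    return FileStorage()

os.environ.pop("HBNB_TYPE_STORAGE", None)
assert isinstance(make_storage(), FileStorage)   # default: file backend

os.environ["HBNB_TYPE_STORAGE"] = "db"
assert isinstance(make_storage(), DBStorage)     # db backend selected
```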
Every model class has a matching if models.storage_t == 'db' branch that
switches between SQLAlchemy column definitions and plain Python class attributes.
City is the simplest example:
# models/city.py
class City(BaseModel, Base):
    if models.storage_t == "db":
        __tablename__ = 'cities'
        state_id = Column(String(60), ForeignKey('states.id'), nullable=False)
        name = Column(String(128), nullable=False)
        places = relationship("Place",
                              backref="cities",
                              cascade="all, delete, delete-orphan")
    else:
        state_id = ""
        name = ""
State shows the relationship side of this pattern — a SQLAlchemy relationship
in db mode, but a computed @property in file mode that scans the storage
dictionary:
# models/state.py
class State(BaseModel, Base):
    if models.storage_t == "db":
        __tablename__ = 'states'
        name = Column(String(128), nullable=False)
        cities = relationship("City",
                              backref="state",
                              cascade="all, delete, delete-orphan")
    else:
        name = ""

    if models.storage_t != "db":
        @property
        def cities(self):
            """File mode: scan storage for cities matching this state_id"""
            city_list = []
            all_cities = models.storage.all(City)
            for city in all_cities.values():
                if city.state_id == self.id:
                    city_list.append(city)
            return city_list
state.cities works identically from outside the model regardless of which
backend is active — callers never need to know.
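A stripped-down sketch of that file-mode property pattern — `Store`, `City`, and `State` here are toy stand-ins, not the project's models:

```python
class Store:
    """Toy stand-in for models.storage in file mode."""
    def __init__(self):
        self.objects = {}           # keys follow "ClassName.id"

    def all(self, cls):
        return {k: v for k, v in self.objects.items() if isinstance(v, cls)}

storage = Store()

class City:
    def __init__(self, id, state_id, name):
        self.id, self.state_id, self.name = id, state_id, name

class State:
    def __init__(self, id):
        self.id = id

    @property
    def cities(self):
        # scan storage for cities pointing back at this state
        return [c for c in storage.all(City).values()
                if c.state_id == self.id]

storage.objects["City.c1"] = City("c1", "s1", "San Francisco")
storage.objects["City.c2"] = City("c2", "s2", "New York")
assert [c.name for c in State("s1").cities] == ["San Francisco"]
```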
BaseModel — Identity, Timestamps, Serialization
Every domain object inherits from BaseModel. It handles UUID generation,
created_at/updated_at timestamps, dict serialization, and storage
delegation. The to_dict method strips SQLAlchemy's internal _sa_instance_state
key, formats datetimes as ISO strings, and always includes __class__ so
deserializing from JSON can reconstruct the right type. The save_fs flag
controls whether password is stripped — it is passed as 1 when writing to
disk so passwords persist, and left unset for API responses so the key is
removed:
# models/base_model.py
time = "%Y-%m-%dT%H:%M:%S.%f"   # module-level ISO-8601 format string

def to_dict(self, save_fs=None):
    new_dict = self.__dict__.copy()
    if "created_at" in new_dict:
        new_dict["created_at"] = new_dict["created_at"].strftime(time)
    if "updated_at" in new_dict:
        new_dict["updated_at"] = new_dict["updated_at"].strftime(time)
    new_dict["__class__"] = self.__class__.__name__
    if "_sa_instance_state" in new_dict:
        del new_dict["_sa_instance_state"]
    if save_fs is None:
        if "password" in new_dict:
            del new_dict["password"]  # never leak password in API responses
    return new_dict

def save(self):
    self.updated_at = datetime.utcnow()
    models.storage.new(self)
    models.storage.save()
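The two output shapes can be seen with a quick self-contained replica of the save_fs branch — `Demo` and `time_fmt` are stand-ins for illustration:

```python
from datetime import datetime

time_fmt = "%Y-%m-%dT%H:%M:%S.%f"

class Demo:
    def __init__(self):
        self.id = "1234"
        self.created_at = datetime(2024, 1, 1)
        self.password = "0cc175b9c0f1b6a831c399e269772661"  # already hashed

    def to_dict(self, save_fs=None):
        new_dict = self.__dict__.copy()
        new_dict["created_at"] = new_dict["created_at"].strftime(time_fmt)
        new_dict["__class__"] = type(self).__name__
        if save_fs is None and "password" in new_dict:
            del new_dict["password"]
        return new_dict

d = Demo()
assert "password" not in d.to_dict()          # API shape: password stripped
assert "password" in d.to_dict(save_fs=1)     # disk shape: password kept
assert d.to_dict()["created_at"] == "2024-01-01T00:00:00.000000"
assert d.to_dict()["__class__"] == "Demo"
```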
User overrides __setattr__ to transparently MD5-hash passwords on
assignment — no special handling needed anywhere else:
# models/user.py
def __setattr__(self, name, value):
    if name == "password":
        value = md5(value.encode()).hexdigest()
    super().__setattr__(name, value)
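Its effect, as a self-contained sketch of the same pattern:

```python
from hashlib import md5

class User:
    """Minimal replica of the transparent-hashing pattern."""
    def __setattr__(self, name, value):
        if name == "password":
            value = md5(value.encode()).hexdigest()
        super().__setattr__(name, value)

u = User()
u.password = "secret"                       # hashed on assignment
assert u.password == md5(b"secret").hexdigest()
assert len(u.password) == 32                # 32 hex chars, never the plaintext
```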
Place — The Many-to-Many Association Table
Place is the most complex model. It has integer and float attributes, two
relationships (reviews, amenities), and the place_amenity association table
that links it to Amenity. The association table is only defined in db mode:
# models/place.py
if models.storage_t == 'db':
    place_amenity = Table('place_amenity', Base.metadata,
                          Column('place_id', String(60),
                                 ForeignKey('places.id',
                                            onupdate='CASCADE',
                                            ondelete='CASCADE'),
                                 primary_key=True),
                          Column('amenity_id', String(60),
                                 ForeignKey('amenities.id',
                                            onupdate='CASCADE',
                                            ondelete='CASCADE'),
                                 primary_key=True))

class Place(BaseModel, Base):
    if models.storage_t == 'db':
        __tablename__ = 'places'
        city_id = Column(String(60), ForeignKey('cities.id'), nullable=False)
        user_id = Column(String(60), ForeignKey('users.id'), nullable=False)
        name = Column(String(128), nullable=False)
        description = Column(String(1024), nullable=True)
        number_rooms = Column(Integer, nullable=False, default=0)
        number_bathrooms = Column(Integer, nullable=False, default=0)
        max_guest = Column(Integer, nullable=False, default=0)
        price_by_night = Column(Integer, nullable=False, default=0)
        latitude = Column(Float, nullable=True)
        longitude = Column(Float, nullable=True)
        reviews = relationship("Review",
                               backref="place",
                               cascade="all, delete, delete-orphan")
        amenities = relationship("Amenity",
                                 secondary=place_amenity,
                                 viewonly=False)
    else:
        city_id = user_id = name = description = ""
        number_rooms = number_bathrooms = max_guest = price_by_night = 0
        latitude = longitude = 0.0
        amenity_ids = []

    if models.storage_t != 'db':
        @property
        def reviews(self):
            from models.review import Review
            return [r for r in models.storage.all(Review).values()
                    if r.place_id == self.id]

        @property
        def amenities(self):
            # file mode: the link lives in this place's amenity_ids list
            from models.amenity import Amenity
            return [a for a in models.storage.all(Amenity).values()
                    if a.id in self.amenity_ids]
FileStorage
FileStorage serializes all objects to a single JSON file. Keys follow the
format ClassName.id. The save method passes save_fs=1 so passwords are
included on disk:
# models/engine/file_storage.py
class FileStorage:
    __file_path = "file.json"
    __objects = {}

    def all(self, cls=None):
        if cls is not None:
            return {k: v for k, v in self.__objects.items()
                    if cls == v.__class__ or cls == v.__class__.__name__}
        return self.__objects

    def new(self, obj):
        key = obj.__class__.__name__ + "." + obj.id
        self.__objects[key] = obj

    def save(self):
        json_objects = {}
        for key in self.__objects:
            json_objects[key] = self.__objects[key].to_dict(save_fs=1)
        with open(self.__file_path, 'w') as f:
            json.dump(json_objects, f)

    def reload(self):
        try:
            with open(self.__file_path, 'r') as f:
                jo = json.load(f)
            for key in jo:
                self.__objects[key] = classes[jo[key]["__class__"]](**jo[key])
        except:
            pass  # file doesn't exist on first run — silently skip

    def get(self, cls, id):
        if cls not in classes.values():
            return None
        for value in models.storage.all(cls).values():
            if value.id == id:
                return value
        return None

    def count(self, cls=None):
        if not cls:
            return sum(len(models.storage.all(c).values())
                       for c in classes.values())
        return len(models.storage.all(cls).values())
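The ClassName.id key scheme and the JSON round-trip can be sketched without the real models — `obj` here is a hand-built dict standing in for a to_dict result:

```python
import json
import uuid

# A serialized object carries __class__ so reload() can rebuild the type
obj = {"id": str(uuid.uuid4()), "__class__": "State", "name": "Oregon"}
key = "{}.{}".format(obj["__class__"], obj["id"])   # e.g. "State.<uuid>"

blob = json.dumps({key: obj})                       # what save() writes
restored = json.loads(blob)                         # what reload() reads

assert key in restored
assert restored[key]["__class__"] == "State"        # type survives round-trip
assert restored[key]["name"] == "Oregon"
```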
DBStorage
DBStorage wraps SQLAlchemy with a scoped_session — one session per thread,
automatically cleaned up on close(). The engine is assembled from four
environment variables, and in test mode all tables are dropped on startup:
# models/engine/db_storage.py
class DBStorage:
    __engine = None
    __session = None

    def __init__(self):
        self.__engine = create_engine('mysql+mysqldb://{}:{}@{}/{}'.format(
            getenv('HBNB_MYSQL_USER'),
            getenv('HBNB_MYSQL_PWD'),
            getenv('HBNB_MYSQL_HOST'),
            getenv('HBNB_MYSQL_DB')))
        if getenv('HBNB_ENV') == "test":
            Base.metadata.drop_all(self.__engine)

    def reload(self):
        Base.metadata.create_all(self.__engine)
        sess_factory = sessionmaker(bind=self.__engine, expire_on_commit=False)
        Session = scoped_session(sess_factory)
        self.__session = Session

    def close(self):
        self.__session.remove()  # called on Flask teardown_appcontext
expire_on_commit=False is important — without it, accessing any attribute
after a commit() triggers a lazy reload that fails once the request context
has closed.
REST API
The Flask API is registered as a Blueprint with the /api/v1 prefix. CORS is
enabled for all origins on /api/v1/*, and Swagger UI is served at /apidocs
via Flasgger. The teardown_appcontext hook ensures the storage session is
returned to the pool after every request:
# api/v1/app.py
app = Flask(__name__)
app.config['JSONIFY_PRETTYPRINT_REGULAR'] = True
app.register_blueprint(app_views)
cors = CORS(app, resources={r"/api/v1/*": {"origins": "*"}})
app.config['SWAGGER'] = {'title': 'AirBnB clone Restful API', 'uiversion': 3}
Swagger(app)

@app.teardown_appcontext
def close_db(error):
    storage.close()

@app.errorhandler(404)
def not_found(error):
    return make_response(jsonify({'error': "Not found"}), 404)
The Blueprint views/__init__.py uses wildcard imports so each view module
registers its routes directly onto app_views with no additional wiring:
# api/v1/views/__init__.py
app_views = Blueprint('app_views', __name__, url_prefix='/api/v1')
from api.v1.views.index import *
from api.v1.views.states import *
from api.v1.views.cities import *
from api.v1.views.places import *
from api.v1.views.places_reviews import *
from api.v1.views.amenities import *
from api.v1.views.users import *
from api.v1.views.places_amenities import *
Status and object count endpoints:
# api/v1/views/index.py
@app_views.route('/status', methods=['GET'], strict_slashes=False)
def status():
    return jsonify({"status": "OK"})

@app_views.route('/stats', methods=['GET'], strict_slashes=False)
def number_objects():
    classes = [Amenity, City, Place, Review, State, User]
    names = ["amenities", "cities", "places", "reviews", "states", "users"]
    return jsonify({names[i]: storage.count(classes[i])
                    for i in range(len(classes))})
CRUD Routes — The Consistent Pattern
Every resource follows the same four-method pattern. States is the clearest
example — no parent resource to validate:
# api/v1/views/states.py
@app_views.route('/states', methods=['GET'], strict_slashes=False)
def get_states():
    return jsonify([s.to_dict() for s in storage.all(State).values()])

@app_views.route('/states/<state_id>', methods=['GET'], strict_slashes=False)
def get_state(state_id):
    state = storage.get(State, state_id)
    if not state:
        abort(404)
    return jsonify(state.to_dict())

@app_views.route('/states/<state_id>', methods=['DELETE'], strict_slashes=False)
def delete_state(state_id):
    state = storage.get(State, state_id)
    if not state:
        abort(404)
    storage.delete(state)
    storage.save()
    return make_response(jsonify({}), 200)

@app_views.route('/states', methods=['POST'], strict_slashes=False)
def post_state():
    if not request.get_json():
        abort(400, description="Not a JSON")
    if 'name' not in request.get_json():
        abort(400, description="Missing name")
    instance = State(**request.get_json())
    instance.save()
    return make_response(jsonify(instance.to_dict()), 201)

@app_views.route('/states/<state_id>', methods=['PUT'], strict_slashes=False)
def put_state(state_id):
    state = storage.get(State, state_id)
    if not state:
        abort(404)
    if not request.get_json():
        abort(400, description="Not a JSON")
    ignore = ['id', 'created_at', 'updated_at']
    for key, value in request.get_json().items():
        if key not in ignore:
            setattr(state, key, value)
    storage.save()
    return make_response(jsonify(state.to_dict()), 200)
Nested resources validate both parent and child. post_city confirms the
state_id in the URL resolves before creating the city:
# api/v1/views/cities.py
@app_views.route('/states/<state_id>/cities', methods=['POST'],
                 strict_slashes=False)
def post_city(state_id):
    state = storage.get(State, state_id)
    if not state:
        abort(404)  # parent must exist first
    if not request.get_json():
        abort(400, description="Not a JSON")
    if 'name' not in request.get_json():
        abort(400, description="Missing name")
    data = request.get_json()
    instance = City(**data)
    instance.state_id = state.id  # attach to parent
    instance.save()
    return make_response(jsonify(instance.to_dict()), 201)
Place creation has an additional validation layer — user_id must be in the
body AND the referenced user must actually exist in storage:
# api/v1/views/places.py
@app_views.route('/cities/<city_id>/places', methods=['POST'],
                 strict_slashes=False)
def post_place(city_id):
    city = storage.get(City, city_id)
    if not city:
        abort(404)
    if not request.get_json():
        abort(400, description="Not a JSON")
    if 'user_id' not in request.get_json():
        abort(400, description="Missing user_id")
    data = request.get_json()
    user = storage.get(User, data['user_id'])
    if not user:
        abort(404)  # user_id provided but user doesn't exist
    if 'name' not in request.get_json():
        abort(400, description="Missing name")
    data["city_id"] = city_id
    instance = Place(**data)
    instance.save()
    return make_response(jsonify(instance.to_dict()), 201)
Place–Amenity Many-to-Many — Storage-Aware Linking
The places_amenities view is the most storage-aware in the entire API. Both
GET and the link/unlink operations check HBNB_TYPE_STORAGE to use either the
SQLAlchemy relationship or the flat amenity_ids list:
# api/v1/views/places_amenities.py
@app_views.route('/places/<place_id>/amenities/<amenity_id>',
                 methods=['POST'], strict_slashes=False)
def post_place_amenity(place_id, amenity_id):
    place = storage.get(Place, place_id)
    amenity = storage.get(Amenity, amenity_id)
    if not place or not amenity:
        abort(404)
    if environ.get('HBNB_TYPE_STORAGE') == "db":
        if amenity in place.amenities:
            return make_response(jsonify(amenity.to_dict()), 200)  # already linked
        place.amenities.append(amenity)
    else:
        if amenity_id in place.amenity_ids:
            return make_response(jsonify(amenity.to_dict()), 200)  # already linked
        place.amenity_ids.append(amenity_id)
    storage.save()
    return make_response(jsonify(amenity.to_dict()), 201)  # newly linked = 201
The 200 vs 201 distinction makes the endpoint idempotent — callers can POST the same link twice without error, but can tell whether anything changed.
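Distilled to plain Python, the file-mode branch behaves like the following — `link_amenity` is a hypothetical helper written for illustration, not a function in the codebase:

```python
def link_amenity(amenity_ids, amenity_id):
    """Return an HTTP-style status: 200 if already linked, 201 if new."""
    if amenity_id in amenity_ids:
        return 200
    amenity_ids.append(amenity_id)
    return 201

ids = []
assert link_amenity(ids, "a1") == 201   # first POST creates the link
assert link_amenity(ids, "a1") == 200   # second POST is a no-op
assert ids == ["a1"]                    # the list still holds one entry
```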
POST /places_search
The most complex endpoint in the API. It accepts an optional JSON body with
states, cities, and amenities arrays. An absent or empty body returns
all places. Otherwise it collects places from the specified states (expanded
via their cities) and the directly-specified cities, deduplicates, then filters
down to only places that have every requested amenity:
# api/v1/views/places.py
@app_views.route('/places_search', methods=['POST'], strict_slashes=False)
def places_search():
    if request.get_json() is None:
        abort(400, description="Not a JSON")
    data = request.get_json()
    states = data.get('states', None)
    cities = data.get('cities', None)
    amenities = data.get('amenities', None)

    # No filters — return everything
    if not data or (not states and not cities and not amenities):
        return jsonify([p.to_dict() for p in storage.all(Place).values()])

    list_places = []
    # Expand states → cities → places
    if states:
        for state in [storage.get(State, s_id) for s_id in states]:
            if state:
                for city in state.cities:
                    for place in city.places:
                        list_places.append(place)

    # Directly specified cities — deduplicate against state expansion
    if cities:
        for city in [storage.get(City, c_id) for c_id in cities]:
            if city:
                for place in city.places:
                    if place not in list_places:
                        list_places.append(place)

    # Filter by required amenities — place must have ALL of them
    if amenities:
        if not list_places:
            list_places = list(storage.all(Place).values())
        amenities_obj = [storage.get(Amenity, a_id) for a_id in amenities]
        list_places = [p for p in list_places
                       if all(am in p.amenities for am in amenities_obj)]

    # Strip amenities key from response — not part of the Place wire format
    places = []
    for p in list_places:
        d = p.to_dict()
        d.pop('amenities', None)
        places.append(d)
    return jsonify(places)
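The ALL-amenities rule at the end is worth isolating. A sketch with plain dicts standing in for Place objects — `filter_by_amenities` is a hypothetical helper for illustration:

```python
def filter_by_amenities(places, required):
    # a place qualifies only if it has EVERY requested amenity
    return [p for p in places
            if all(a in p["amenities"] for a in required)]

places = [
    {"name": "Loft",  "amenities": {"wifi", "pool"}},
    {"name": "Cabin", "amenities": {"wifi"}},
]
hits = filter_by_amenities(places, {"wifi", "pool"})
assert [p["name"] for p in hits] == ["Loft"]          # Cabin lacks "pool"
assert len(filter_by_amenities(places, set())) == 2   # no filter: everything
```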
Swagger Documentation
Every view is decorated with @swag_from pointing to a YAML file in
documentation/. Flasgger assembles these into a browsable UI at /apidocs.
The states GET YAML:
# api/v1/views/documentation/state/get_state.yml
Gets the list of all states
---
tags:
  - States
responses:
  200:
    description: Successful request
    schema:
      type: array
      items:
        properties:
          __class__:
            type: string
          created_at:
            type: string
          updated_at:
            type: string
          id:
            type: string
            description: The uuid of the state instance
          name:
            type: string
            description: State name
Each view file references its YAML by relative path, which keeps the HTTP handler code clean:
@app_views.route('/states', methods=['GET'], strict_slashes=False)
@swag_from('documentation/state/get_state.yml', methods=['GET'])
def get_states():
    ...
Flask Web Frontend (Jinja2)
web_flask/ contains a server-rendered frontend. Route variables are typed —
/number/<int:n> only matches integers, so Flask returns a 404 for non-numeric
paths automatically:
# web_flask/4-number_route.py
@app.route('/number/<int:n>', strict_slashes=False)
def is_n_number(n):
    return "{:d} is a number".format(n)
The states list page fetches directly from storage, sorts by name, and renders
via Jinja2. teardown_appcontext closes the storage session after every request:
# web_flask/7-states_list.py
@app.teardown_appcontext
def close_db(error):
    storage.close()

@app.route('/states_list', strict_slashes=False)
def states_list():
    states = sorted(storage.all(State).values(), key=lambda k: k.name)
    return render_template('7-states_list.html', states=states)
The cities-by-states page builds a [state, [sorted_cities]] list before
passing to the template:
# web_flask/8-cities_by_states.py
@app.route('/cities_by_states', strict_slashes=False)
def cities_list():
    states = sorted(storage.all(State).values(), key=lambda k: k.name)
    st_ct = [[state, sorted(state.cities, key=lambda k: k.name)]
             for state in states]
    return render_template('8-cities_by_states.html', states=st_ct, h_1="States")
<!-- web_flask/templates/8-cities_by_states.html -->
{% for state in states %}
  <li>
    {{ state[0].id }}: <b>{{ state[0].name }}</b>
    <ul>
      {% for city in state[1] %}
        <li>{{ city.id }}: <b>{{ city.name }}</b></li>
      {% endfor %}
    </ul>
  </li>
{% endfor %}
Dynamic Frontend (jQuery + REST API)
web_dynamic/ replaces Jinja2 rendering with jQuery AJAX calls. The Flask
routes still serve the HTML shell and inject a fresh uuid.uuid4() per request
as cache_id, appended to every static asset URL as a query string — this
busts browser cache on every deploy without a build step:
# web_dynamic/100-hbnb.py
@app.route('/100-hbnb/', strict_slashes=False)
def hbnb():
    cache_id = uuid.uuid4()  # new UUID per request
    return render_template('100-hbnb.html', ..., cache_id=cache_id)

<!-- Every static asset gets the cache-busting query string -->
<link rel="stylesheet" href="../static/styles/4-common.css?{{ cache_id }}" />
<script src="../static/scripts/100-hbnb.js?{{ cache_id }}"></script>
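The cache-busting trick reduces to appending a fresh UUID per render. A stand-alone Python sketch — `cache_busted` is a hypothetical helper, not project code:

```python
import uuid

def cache_busted(url):
    """Append a per-request UUID so browsers refetch after each deploy."""
    return "{}?{}".format(url, uuid.uuid4())

a = cache_busted("../static/scripts/100-hbnb.js")
b = cache_busted("../static/scripts/100-hbnb.js")
assert a != b                                        # distinct per request
assert a.startswith("../static/scripts/100-hbnb.js?")
```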
The full dynamic script tracks selected amenities, states, and cities as arrays,
POSTs them to /api/v1/places_search on button click, and dynamically renders
the results. The API status indicator turns Airbnb-red when the status ping
succeeds:
// web_dynamic/static/scripts/100-hbnb.js (abbreviated)
$(document).ready(function () {
  let myAmenities = [],
    myStates = [],
    myCities = [];

  // Track amenity checkbox state, update display label
  $('.amenities .popover input[type=checkbox]').click(function () {
    const myListName = [];
    myAmenities = [];
    $('.amenities .popover input[type=checkbox]:checked').each(function () {
      myListName.unshift($(this).attr('data-name'));
      myAmenities.unshift($(this).attr('data-id'));
    });
    $('.amenities h4').text(myListName.length === 0 ? '\u00a0' : myListName.join(', '));
  });

  // POST search with current filter state
  $('.filters button').click(function (event) {
    event.preventDefault();
    $('.places').text('');
    listPlaces(
      JSON.stringify({
        amenities: myAmenities,
        states: myStates,
        cities: myCities,
      }),
    );
  });

  // API status indicator
  $.ajax({
    url: 'http://0.0.0.0:5001/api/v1/status/',
    type: 'GET',
    success: function () {
      $('#api_status').addClass('available');
    },
  });

  listPlaces(); // load all places on init
});
function listPlaces(consult = '{}') {
  $.ajax({
    type: 'POST',
    url: 'http://0.0.0.0:5001/api/v1/places_search',
    dataType: 'json',
    data: consult,
    contentType: 'application/json; charset=utf-8',
    success: function (places) {
      for (let place of places) {
        $('.places').append(`
          <article>
            <div class="title_box">
              <h2>${place.name}</h2>
              <div class="price_by_night">${place.price_by_night}</div>
            </div>
            <div class="information">
              <div class="max_guest">
                ${place.max_guest} ${place.max_guest > 1 ? 'Guests' : 'Guest'}
              </div>
              <div class="number_rooms">
                ${place.number_rooms} ${place.number_rooms > 1 ? 'Bedrooms' : 'Bedroom'}
              </div>
              <div class="number_bathrooms">
                ${place.number_bathrooms} ${place.number_bathrooms > 1 ? 'Bathrooms' : 'Bathroom'}
              </div>
            </div>
            <div class="description">${place.description}</div>
          </article>
        `);
      }
    },
  });
}
The CSS indicator uses a class override — .available sets the Airbnb-red color
on a circle div that's gray by default:
/* web_dynamic/static/styles/3-header.css */
#api_status {
  background-color: #ccc;
  height: 40px;
  width: 40px;
  border-radius: 50px;
}

.available {
  background-color: #ff545f !important;
}
Fabric Deployment
Three Fabric scripts automate the full deploy cycle. do_pack creates a
timestamped .tgz archive:
# 1-pack_web_static.py
from datetime import datetime
from os.path import isdir
from fabric.api import local

def do_pack():
    try:
        date = datetime.now().strftime("%Y%m%d%H%M%S")
        if isdir("versions") is False:
            local("mkdir versions")
        file_name = "versions/web_static_{}.tgz".format(date)
        local("tar -cvzf {} web_static".format(file_name))
        return file_name
    except:
        return None
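The naming convention can be checked in isolation — the timestamp below is a fixed example, not a real deploy:

```python
from datetime import datetime

# do_pack names archives versions/web_static_<YYYYmmddHHMMSS>.tgz
stamp = datetime(2024, 6, 1, 12, 30, 5).strftime("%Y%m%d%H%M%S")
name = "versions/web_static_{}.tgz".format(stamp)

assert stamp == "20240601123005"
assert name == "versions/web_static_20240601123005.tgz"
```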
do_deploy uploads the archive to both servers, extracts it into a versioned
directory, moves contents up out of the web_static/ subdirectory that tar
creates, and repoints the current symlink at the new release. The rm + ln
pair is fast but not strictly atomic — there is a brief window where the
link is absent:
# 2-do_deploy_web_static.py
from os.path import exists
from fabric.api import env, put, run

env.hosts = ['142.44.167.228', '144.217.246.195']

def do_deploy(archive_path):
    if exists(archive_path) is False:
        return False
    try:
        file_n = archive_path.split("/")[-1]  # web_static_20240601.tgz
        no_ext = file_n.split(".")[0]         # web_static_20240601
        path = "/data/web_static/releases/"
        put(archive_path, '/tmp/')
        run('mkdir -p {}{}/'.format(path, no_ext))
        run('tar -xzf /tmp/{} -C {}{}/'.format(file_n, path, no_ext))
        run('rm /tmp/{}'.format(file_n))
        # tar creates web_static/ subdirectory — move contents up one level
        run('mv {0}{1}/web_static/* {0}{1}/'.format(path, no_ext))
        run('rm -rf {}{}/web_static'.format(path, no_ext))
        # swap the current symlink (rm + ln — fast, but not strictly atomic)
        run('rm -rf /data/web_static/current')
        run('ln -s {}{}/ /data/web_static/current'.format(path, no_ext))
        return True
    except:
        return False
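The path arithmetic inside do_deploy, pulled out into a testable helper — `release_dir` is hypothetical, written only to illustrate the string handling:

```python
def release_dir(archive_path, base="/data/web_static/releases/"):
    """Derive the versioned release directory from the archive path."""
    file_n = archive_path.split("/")[-1]   # web_static_<ts>.tgz
    no_ext = file_n.split(".")[0]          # web_static_<ts>
    return base + no_ext + "/"

assert release_dir("versions/web_static_20240601123005.tgz") == \
    "/data/web_static/releases/web_static_20240601123005/"
```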
deploy() combines both into one command. Running
fab -f 3-deploy_web_static.py deploy packs the current web_static/, uploads
it to each server in turn, and swaps the symlink. Rollback is just pointing
the symlink at a previous versioned directory:
# 3-deploy_web_static.py
def deploy():
    archive_path = do_pack()
    if archive_path is None:
        return False
    return do_deploy(archive_path)
Testing
The test suite uses unittest and enforces PEP8 via pep8 checks embedded in
the test cases. TestBaseModel verifies UUID format with a regex, checks that
updated_at changes on save() while created_at stays constant, and uses
mock.patch to confirm storage is called without touching the filesystem:
# tests/test_models/test_base_model.py
def test_uuid(self):
    inst1 = BaseModel()
    inst2 = BaseModel()
    for inst in [inst1, inst2]:
        self.assertRegex(inst.id,
                         '^[0-9a-f]{8}-[0-9a-f]{4}'
                         '-[0-9a-f]{4}-[0-9a-f]{4}'
                         '-[0-9a-f]{12}$')
    self.assertNotEqual(inst1.id, inst2.id)

@mock.patch('models.storage')
def test_save(self, mock_storage):
    inst = BaseModel()
    old_updated_at = inst.updated_at
    old_created_at = inst.created_at
    inst.save()
    self.assertNotEqual(old_updated_at, inst.updated_at)  # updated_at changed
    self.assertEqual(old_created_at, inst.created_at)     # created_at unchanged
    self.assertTrue(mock_storage.new.called)
    self.assertTrue(mock_storage.save.called)
Storage tests use @unittest.skipIf to run only against the active backend,
so the same test suite works with both storage engines:
# tests/test_models/test_engine/test_file_storage.py
@unittest.skipIf(models.storage_t == 'db', "not testing file storage")
def test_get(self):
    storage = FileStorage()
    instance = State(name="Vecindad")
    storage.new(instance)
    storage.save()
    get_instance = storage.get(State, instance.id)
    self.assertEqual(get_instance, instance)

@unittest.skipIf(models.storage_t == 'db', "not testing file storage")
def test_count(self):
    storage = FileStorage()
    state = State(name="Vecindad")
    city = City(name="Mexico")
    storage.new(state)
    storage.new(city)
    storage.save()
    self.assertEqual(len(storage.all()), storage.count())