
Tahoe-LAFS

The Tin Hat | Simple Online Privacy Guides and Tutorials

Clustered Filesystem with DRBD and OCFS2 on CentOS 5.5 - Tutorials / Howtos - Sysconfig's Wiki Some months ago I published Clustered Filesystem with DRBD and GFS2 on CentOS 5.4 in this wiki. Now I would like to present another option: OCFS2, a clustered filesystem developed by Oracle. OCFS was initially focused on use with Oracle's databases, but OCFS2 is a general-purpose cluster filesystem. It is available as open source, and Oracle does quite a good job of publishing packages and modules for each and every kernel version (so you don't need to recompile it over and over again when you update your kernel). You will notice that this howto is very similar to the GFS2 howto mentioned earlier. For this short tutorial, I assume that:
- you have set up identical unused disk partitions on both nodes (sdf in this tutorial);
- ideally, the two nodes are connected via a distinct network link, and IP addresses have been assigned (I'm using 10.10.10.1 and 10.10.10.2 here);
- you are running CentOS 5.x on both nodes.
Unless stated otherwise, please do everything on both nodes!
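As a minimal sketch of the setup this howto describes, assuming DRBD 8 in dual-primary mode on the partitions and addresses above (the resource name r0, hostnames node1/node2, and mount point are illustrative, not taken from the original):

    # /etc/drbd.conf fragment: one resource, dual-primary so both nodes can mount
    resource r0 {
      protocol C;                       # fully synchronous replication
      net { allow-two-primaries; }      # needed for a cluster filesystem
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdf;
        address   10.10.10.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdf;
        address   10.10.10.2:7788;
        meta-disk internal;
      }
    }

    # both nodes: create metadata and bring the resource up
    drbdadm create-md r0
    drbdadm up r0

    # one node only: kick off the initial sync, then promote both sides
    drbdadm -- --overwrite-data-of-peer primary r0

    # one node only: create the filesystem (-N 2 = two cluster node slots)
    mkfs.ocfs2 -N 2 /dev/drbd0

    # both nodes, with the o2cb cluster stack configured and running:
    mount -t ocfs2 /dev/drbd0 /mnt/cluster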

Seafile

DLFP: MooseFS, a fault-tolerant distributed filesystem MooseFS is a little-known distributed filesystem brimming with good qualities. Among them:
- the code is distributed under the GPLv3;
- it uses FUSE and runs in user space;
- it has an automatic trash bin with a retention period you can tune at will;
- it is very simple to deploy and administer: count on about an hour, documentation reading included, to have a working master server and four data servers;
- it is POSIX-compliant, so programs need no modification to access it;
- adding machines to grow the available space is child's play;
- you choose the number of replicas you want, per file or per directory, for fault tolerance, with a single command, all while the system is running.
Development of MooseFS began in 2005, and it was released as free software on 30 May 2008. Of course, not everything is quite perfect.
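To illustrate that last point, a minimal sketch using the standard MooseFS client tools, assuming the filesystem is mounted at /mnt/mfs (the mount point and directory name are illustrative):

    # replicate everything under this directory twice (-r = recursive)
    mfssetgoal -r 2 /mnt/mfs/shared

    # confirm the goal now in effect
    mfsgetgoal /mnt/mfs/shared

    # keep deleted files recoverable from the trash bin for 7 days (value in seconds)
    mfssettrashtime -r 604800 /mnt/mfs/shared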

JStylo-Anonymouth - PSAL From PSAL: The JStylo and Anonymouth integrated open-source project (JSAN) resides on GitHub. What is JSAN? JSAN is a writing-style analysis and anonymization framework. It consists of two parts: JStylo, an authorship attribution framework, and Anonymouth, an authorship evasion (anonymization) framework. JStylo serves as the underlying feature extraction and authorship attribution engine for Anonymouth, which takes the stylometric features and classification results obtained through JStylo and suggests changes users can make to anonymize their writing style. Details about JSAN: Use Fewer Instances of the Letter "i": Toward Writing Style Anonymization. A JSAN tutorial was presented at 28c3 (video available). If you use JStylo and/or Anonymouth in your research, please cite: Andrew McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman and Rachel Greenstadt. If you use the corpus in your research, please cite: Michael Brennan and Rachel Greenstadt.

Avoiding SPOF -- Clustered, replicated Storage with GlusterFS - Tutorials / Howtos - Sysconfig's Wiki In an environment where high availability is crucial, you won't be able to work without cross-machine replication in the long run. While normal RAIDs, backups, and networked file systems are reasonable ways to protect yourself from data loss and to access the same data from multiple hosts, they still leave you with a single point of failure: the storage itself. There are plenty of rather expensive solutions out there that pack two machines and multiple NICs into one box, accessing the same RAID array (NetApp®, for example). GlusterFS can help here. In this mini tutorial, I'd like to describe how to create replicated clustered storage with two machines; a sketch of the resulting setup follows below. This short tutorial is based on CentOS 5.4 x64, but the GlusterFS team also provides binaries for other RedHat-based systems and Debian (including derivatives), as well as the source code. So let's go... Obtaining and Installing the Binaries: all relevant binaries can be found here:
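A minimal sketch of such a two-node replicated volume, using the modern gluster CLI rather than the volfile configuration a CentOS 5.4-era release would have used (hostnames node1/node2, the volume name gv0, and the brick paths are illustrative):

    # on both nodes: install and start glusterfs-server, then from node1
    # add the second node to the trusted pool
    gluster peer probe node2

    # create a two-way replicated volume, one brick per node, and start it
    gluster volume create gv0 replica 2 node1:/data/brick1 node2:/data/brick1
    gluster volume start gv0

    # on a client: mount via the native protocol; replication happens on the
    # client side, so the mount keeps working if either server fails
    mount -t glusterfs node1:/gv0 /mnt/gluster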

ownCloud Features, Architecture and Requirements

MooseFS network file system - Moose FS

The 20 Coolest Jobs in Information Security
#1 Information Security Crime Investigator/Forensics Expert
#2 System, Network, and/or Web Penetration Tester
#3 Forensic Analyst
#4 Incident Responder
#5 Security Architect
#6 Malware Analyst
#7 Network Security Engineer
#8 Security Analyst
#9 Computer Crime Investigator
#10 CISO/ISO or Director of Security
#11 Application Penetration Tester
#12 Security Operations Center Analyst
#13 Prosecutor Specializing in Information Security Crime
#14 Technical Director and Deputy CISO
#15 Intrusion Analyst
#16 Vulnerability Researcher/Exploit Developer
#17 Security Auditor
#18 Security-savvy Software Developer
#19 Security Maven in an Application Developer Organization
#20 Disaster Recovery/Business Continuity Analyst/Manager

#1 - Information Security Crime Investigator/Forensics Expert - Top Gun Job. The thrill of the hunt! You never encounter the same crime twice! Each entry covers Job Description, SANS Courses Recommended, Why It's Cool, How It Makes a Difference, and How to Be Successful (e.g., stay abreast of the latest attack methodologies).

Ceph: a distributed filesystem worth watching | InZeCloud.Fr Like MooseFS, Ceph is a distributed filesystem. Its hardware and software architecture rests on a group of storage servers (called OSDs), a set of metadata servers (MDS), and a few monitoring services whose job is to keep the whole consistent according to the state of the various nodes. The logical architecture, for its part, consists of two layers: the first is an abstraction layer called RADOS, which provides a distributed, scalable, fault-tolerant storage space; the second, Ceph itself, uses the services of the first to store files and provides a POSIX-compatible command set for accessing them. There are several ways to interact with Ceph. The metadata servers are a set of servers that hold information about files in the form of metadata and offer access to it via a mount point. Setting up a Ceph cluster
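To make the two layers concrete, a minimal sketch of checking cluster state and then consuming it as a POSIX filesystem through the kernel client, assuming a running cluster with a monitor at 10.0.0.1 and an admin keyring already deployed (the address, mount point, and secret-file path are illustrative):

    # overall cluster state: monitors, OSDs, MDS, and placement-group health
    ceph -s

    # mount the POSIX layer; the MDS answers metadata operations while
    # file data goes directly to the OSDs via RADOS
    mount -t ceph 10.0.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret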
