sas_ata.c

commit 176ddd89171ddcf661862d90c5d257877f7326d6
Author: Jolly Shah <jollys@google.com>

scsi: libsas: Reset num_scatter if libata marks qc as NODATA
      When the cache_type for a SCSI device is changed, the SCSI layer issues a
      MODE_SELECT command. The caching mode details are communicated via a
      request buffer associated with the SCSI command, with the data direction
      set to DMA_TO_DEVICE (scsi_mode_select()). When this command reaches the
      libata layer, the generic initial setup in ata_scsi_qc_new() builds a
      scatterlist for the command from the SCSI command. The command is then
      translated by the libata layer into ATA_CMD_SET_FEATURES
      (ata_scsi_mode_select_xlat()). The libata layer treats this as a non-data
      command (ata_mselect_caching()), since it only needs an ATA taskfile to
      pass the caching on/off information to the device. It does not need the
      scatterlist that has been set up, so it does not perform dma_map_sg() on
      the scatterlist (ata_qc_issue()). Unfortunately, when the command reaches
      the libsas layer (sas_ata_qc_issue()), libsas sees a non-data command
      that carries a scatterlist. It cannot extract the correct DMA length,
      since the scatterlist has not been mapped with dma_map_sg() for a DMA
      operation. When this partially constructed SAS task reaches the pm80xx
      LLDD, it triggers the following warning:
      
      "pm80xx_chip_sata_req 6058: The sg list address
      start_addr=0x0000000000000000 data_len=0x0end_addr_high=0xffffffff
      end_addr_low=0xffffffff has crossed 4G boundary"
      
      Update libsas to handle ATA non-data commands separately so num_scatter and
      total_xfer_len remain 0.
      
      Link: https://lore.kernel.org/r/20210318225632.2481291-1-jollys@google.com
      Fixes: 53de092f ("scsi: libsas: Set data_dir as DMA_NONE if libata marks qc as NODATA")
      Tested-by: Luo Jiaxing <luojiaxing@huawei.com>
      Reviewed-by: John Garry <john.garry@huawei.com>
      Signed-off-by: Jolly Shah <jollys@google.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
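
      The shape of the change in sas_ata_qc_issue() is roughly the following
      sketch, reconstructed from the commit description above rather than taken
      from the verbatim diff: the ATA_PROT_NODATA case is split out before the
      scatterlist walk, so num_scatter and total_xfer_len keep their
      zero-initialized values and data_dir is forced to DMA_NONE.

	if (ata_is_atapi(qc->tf.protocol)) {
		memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
		task->total_xfer_len = qc->nbytes;
		task->num_scatter = qc->n_elem;
		task->data_dir = qc->dma_dir;
	} else if (qc->tf.protocol == ATA_PROT_NODATA) {
		/* Non-data ATA command: the scatterlist was never mapped
		 * with dma_map_sg(), so leave num_scatter and
		 * total_xfer_len at 0 and report no DMA direction. */
		task->data_dir = DMA_NONE;
	} else {
		for_each_sg(qc->sg, sg, qc->n_elem, si)
			xfer += sg_dma_len(sg);

		task->total_xfer_len = xfer;
		task->num_scatter = si;
		task->data_dir = qc->dma_dir;
	}
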
    drm_lock.c 10.91 KiB
    /**
     * \file drm_lock.c
     * IOCTLs for locking
     *
     * \author Rickard E. (Rik) Faith <faith@valinux.com>
     * \author Gareth Hughes <gareth@valinux.com>
     */
    
    /*
     * Created: Tue Feb  2 08:37:54 1999 by faith@valinux.com
     *
     * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
     * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
     * All Rights Reserved.
     *
     * Permission is hereby granted, free of charge, to any person obtaining a
     * copy of this software and associated documentation files (the "Software"),
     * to deal in the Software without restriction, including without limitation
     * the rights to use, copy, modify, merge, publish, distribute, sublicense,
     * and/or sell copies of the Software, and to permit persons to whom the
     * Software is furnished to do so, subject to the following conditions:
     *
     * The above copyright notice and this permission notice (including the next
     * paragraph) shall be included in all copies or substantial portions of the
     * Software.
     *
     * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
     * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
     * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
     * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
     * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
     * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
     * OTHER DEALINGS IN THE SOFTWARE.
     */
    
    #include "drmP.h"
    
    static int drm_notifier(void *priv);
    
    /**
     * Lock ioctl.
     *
     * \param dev DRM device.
     * \param data user argument, pointing to a drm_lock structure.
     * \param file_priv DRM file private.
     * \return zero on success or negative number on failure.
     *
     * Add the current task to the lock wait queue, and attempt to take the lock.
     */
    int drm_lock(struct drm_device *dev, void *data, struct drm_file *file_priv)
    {
    	DECLARE_WAITQUEUE(entry, current);
    	struct drm_lock *lock = data;
    	struct drm_master *master = file_priv->master;
    	int ret = 0;
    
    	++file_priv->lock_count;
    
    	if (lock->context == DRM_KERNEL_CONTEXT) {
    		DRM_ERROR("Process %d using kernel context %d\n",
    			  task_pid_nr(current), lock->context);
    		return -EINVAL;
    	}
    
    	DRM_DEBUG("%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
    		  lock->context, task_pid_nr(current),
    		  master->lock.hw_lock->lock, lock->flags);
    
    	if (drm_core_check_feature(dev, DRIVER_DMA_QUEUE))
    		if (lock->context < 0)
    			return -EINVAL;
    
    	add_wait_queue(&master->lock.lock_queue, &entry);
    	spin_lock_bh(&master->lock.spinlock);
    	master->lock.user_waiters++;
    	spin_unlock_bh(&master->lock.spinlock);
    
    	for (;;) {
    		__set_current_state(TASK_INTERRUPTIBLE);
    		if (!master->lock.hw_lock) {
    			/* Device has been unregistered */
    			send_sig(SIGTERM, current, 0);
    			ret = -EINTR;
    			break;
    		}
    		if (drm_lock_take(&master->lock, lock->context)) {
    			master->lock.file_priv = file_priv;
    			master->lock.lock_time = jiffies;
    			atomic_inc(&dev->counts[_DRM_STAT_LOCKS]);
    			break;	/* Got lock */
    		}
    
    		/* Contention */
    		mutex_unlock(&drm_global_mutex);
    		schedule();
    		mutex_lock(&drm_global_mutex);
    		if (signal_pending(current)) {
    			ret = -EINTR;
    			break;
    		}
    	}
    	spin_lock_bh(&master->lock.spinlock);
    	master->lock.user_waiters--;
    	spin_unlock_bh(&master->lock.spinlock);
    	__set_current_state(TASK_RUNNING);
    	remove_wait_queue(&master->lock.lock_queue, &entry);
    
    	DRM_DEBUG("%d %s\n", lock->context,
    		  ret ? "interrupted" : "has lock");
    	if (ret)
    		return ret;
    
    	/* Don't block all signals on the master process for now.
    	 * This is probably not the correct answer, but it lets us
    	 * debug the xkb X server for now. */
    	if (!file_priv->is_master) {
    		sigemptyset(&dev->sigmask);
    		sigaddset(&dev->sigmask, SIGSTOP);
    		sigaddset(&dev->sigmask, SIGTSTP);
    		sigaddset(&dev->sigmask, SIGTTIN);
    		sigaddset(&dev->sigmask, SIGTTOU);
    		dev->sigdata.context = lock->context;
    		dev->sigdata.lock = master->lock.hw_lock;
    		block_all_signals(drm_notifier, &dev->sigdata, &dev->sigmask);
    	}
    
    	if (dev->driver->dma_ready && (lock->flags & _DRM_LOCK_READY))
    		dev->driver->dma_ready(dev);
    
    	if (dev->driver->dma_quiescent && (lock->flags & _DRM_LOCK_QUIESCENT))
    	{
    		if (dev->driver->dma_quiescent(dev)) {
    			DRM_DEBUG("%d waiting for DMA quiescent\n",
    				  lock->context);
    			return -EBUSY;
    		}
    	}
    
    	if (dev->driver->kernel_context_switch &&
    	    dev->last_context != lock->context) {
    		dev->driver->kernel_context_switch(dev, dev->last_context,
    						   lock->context);
    	}
    
    	return 0;
    }
    
    /**
     * Unlock ioctl.
     *
     * \param dev DRM device.
     * \param data user argument, pointing to a drm_lock structure.
     * \param file_priv DRM file private.
     * \return zero on success or negative number on failure.
     *
     * Transfer and free the lock.
     */
    int drm_unlock(struct drm_device *dev, void *data, struct drm_file *file_priv)
    {
    	struct drm_lock *lock = data;
    	struct drm_master *master = file_priv->master;
    
    	if (lock->context == DRM_KERNEL_CONTEXT) {
    		DRM_ERROR("Process %d using kernel context %d\n",
    			  task_pid_nr(current), lock->context);
    		return -EINVAL;
    	}
    
    	atomic_inc(&dev->counts[_DRM_STAT_UNLOCKS]);
    
    	/* kernel_context_switch isn't used by any of the x86 drm
    	 * modules but is required by the Sparc driver.
    	 */
    	if (dev->driver->kernel_context_switch_unlock)
    		dev->driver->kernel_context_switch_unlock(dev);
    	else {
    		if (drm_lock_free(&master->lock, lock->context)) {
    			/* FIXME: Should really bail out here. */
    		}
    	}
    
    	unblock_all_signals();
    	return 0;
    }
    
    /**
     * Take the heavyweight lock.
     *
     * \param lock_data lock data pointer.
     * \param context locking context.
     * \return one if the lock is held, or zero otherwise.
     *
     * Attempt to mark the lock as held by the given context, via the \p cmpxchg instruction.
     */
    int drm_lock_take(struct drm_lock_data *lock_data,
    		  unsigned int context)
    {
    	unsigned int old, new, prev;
    	volatile unsigned int *lock = &lock_data->hw_lock->lock;
    
    	spin_lock_bh(&lock_data->spinlock);
    	do {
    		old = *lock;
    		if (old & _DRM_LOCK_HELD)
    			new = old | _DRM_LOCK_CONT;
    		else {
    			new = context | _DRM_LOCK_HELD |
    				((lock_data->user_waiters + lock_data->kernel_waiters > 1) ?
    				 _DRM_LOCK_CONT : 0);
    		}
    		prev = cmpxchg(lock, old, new);
    	} while (prev != old);
    	spin_unlock_bh(&lock_data->spinlock);
    
    	if (_DRM_LOCKING_CONTEXT(old) == context) {
    		if (old & _DRM_LOCK_HELD) {
    			if (context != DRM_KERNEL_CONTEXT) {
    				DRM_ERROR("%d holds heavyweight lock\n",
    					  context);
    			}
    			return 0;
    		}
    	}
    
    	if ((_DRM_LOCKING_CONTEXT(new)) == context && (new & _DRM_LOCK_HELD)) {
    		/* Have lock */
    		return 1;
    	}
    	return 0;
    }
    EXPORT_SYMBOL(drm_lock_take);
    
    /**
     * This takes a lock forcibly and hands it to context.	Should ONLY be used
     * inside *_unlock to give lock to kernel before calling *_dma_schedule.
     *
     * \param lock_data lock data pointer.
     * \param context locking context.
     * \return always one.
     *
     * Resets the lock file pointer.
     * Marks the lock as held by the given context, via the \p cmpxchg instruction.
     */
    static int drm_lock_transfer(struct drm_lock_data *lock_data,
    			     unsigned int context)
    {
    	unsigned int old, new, prev;
    	volatile unsigned int *lock = &lock_data->hw_lock->lock;
    
    	lock_data->file_priv = NULL;
    	do {
    		old = *lock;
    		new = context | _DRM_LOCK_HELD;
    		prev = cmpxchg(lock, old, new);
    	} while (prev != old);
    	return 1;
    }
    
    /**
     * Free lock.
     *
     * \param lock_data lock data pointer.
     * \param context locking context.
     *
     * Resets the lock file pointer.
     * Marks the lock as not held, via the \p cmpxchg instruction. Wakes any task
     * waiting on the lock queue.
     */
    int drm_lock_free(struct drm_lock_data *lock_data, unsigned int context)
    {
    	unsigned int old, new, prev;
    	volatile unsigned int *lock = &lock_data->hw_lock->lock;
    
    	spin_lock_bh(&lock_data->spinlock);
    	if (lock_data->kernel_waiters != 0) {
    		drm_lock_transfer(lock_data, 0);
    		lock_data->idle_has_lock = 1;
    		spin_unlock_bh(&lock_data->spinlock);
    		return 1;
    	}
    	spin_unlock_bh(&lock_data->spinlock);
    
    	do {
    		old = *lock;
    		new = _DRM_LOCKING_CONTEXT(old);
    		prev = cmpxchg(lock, old, new);
    	} while (prev != old);
    
    	if (_DRM_LOCK_IS_HELD(old) && _DRM_LOCKING_CONTEXT(old) != context) {
    		DRM_ERROR("%d freed heavyweight lock held by %d\n",
    			  context, _DRM_LOCKING_CONTEXT(old));
    		return 1;
    	}
    	wake_up_interruptible(&lock_data->lock_queue);
    	return 0;
    }
    EXPORT_SYMBOL(drm_lock_free);
    
    /**
     * If we get here, it means that the process has called DRM_IOCTL_LOCK
     * without calling DRM_IOCTL_UNLOCK.
     *
     * If the lock is not held, then let the signal proceed as usual.  If the lock
     * is held, then set the contended flag and keep the signal blocked.
     *
     * \param priv pointer to a drm_sigdata structure.
     * \return one if the signal should be delivered normally, or zero if the
     * signal should be blocked.
     */
    static int drm_notifier(void *priv)
    {
    	struct drm_sigdata *s = (struct drm_sigdata *) priv;
    	unsigned int old, new, prev;
    
    	/* Allow signal delivery if lock isn't held */
    	if (!s->lock || !_DRM_LOCK_IS_HELD(s->lock->lock)
    	    || _DRM_LOCKING_CONTEXT(s->lock->lock) != s->context)
    		return 1;
    
    	/* Otherwise, set flag to force call to
    	   drmUnlock */
    	do {
    		old = s->lock->lock;
    		new = old | _DRM_LOCK_CONT;
    		prev = cmpxchg(&s->lock->lock, old, new);
    	} while (prev != old);
    	return 0;
    }
    
    /**
     * This function returns immediately. It takes the HW lock with the kernel
     * context if the lock is free; otherwise the kernel context is given the
     * highest priority when and if the lock is eventually released.
     *
     * This guarantees that the kernel will _eventually_ have the lock _unless_ it is held
     * by a blocked process. (In the latter case an explicit wait for the hardware lock would cause
     * a deadlock, which is why the "idlelock" was invented).
     *
     * This should be sufficient to wait for GPU idle without
     * having to worry about starvation.
     */
    
    void drm_idlelock_take(struct drm_lock_data *lock_data)
    {
    	int ret = 0;
    
    	spin_lock_bh(&lock_data->spinlock);
    	lock_data->kernel_waiters++;
    	if (!lock_data->idle_has_lock) {
    
    		spin_unlock_bh(&lock_data->spinlock);
    		ret = drm_lock_take(lock_data, DRM_KERNEL_CONTEXT);
    		spin_lock_bh(&lock_data->spinlock);
    
    		if (ret == 1)
    			lock_data->idle_has_lock = 1;
    	}
    	spin_unlock_bh(&lock_data->spinlock);
    }
    EXPORT_SYMBOL(drm_idlelock_take);
    
    void drm_idlelock_release(struct drm_lock_data *lock_data)
    {
    	unsigned int old, prev;
    	volatile unsigned int *lock = &lock_data->hw_lock->lock;
    
    	spin_lock_bh(&lock_data->spinlock);
    	if (--lock_data->kernel_waiters == 0) {
    		if (lock_data->idle_has_lock) {
    			do {
    				old = *lock;
    				prev = cmpxchg(lock, old, DRM_KERNEL_CONTEXT);
    			} while (prev != old);
    			wake_up_interruptible(&lock_data->lock_queue);
    			lock_data->idle_has_lock = 0;
    		}
    	}
    	spin_unlock_bh(&lock_data->spinlock);
    }
    EXPORT_SYMBOL(drm_idlelock_release);
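    
    /*
     * Hypothetical usage sketch (not part of drm_lock.c): one way a driver
     * could pair drm_idlelock_take()/drm_idlelock_release() to quiesce the
     * GPU. The helper name is invented; dma_quiescent is the driver hook
     * already used by drm_lock() above.
     */
    static void example_quiesce_gpu(struct drm_device *dev,
    				    struct drm_master *master)
    {
    	/* Returns immediately; takes the HW lock with the kernel context
    	 * if it is free, otherwise gets priority when it is released. */
    	drm_idlelock_take(&master->lock);
    
    	/* With the idlelock held, no new userspace holder can take the
    	 * heavyweight lock, so pending DMA can be drained safely. */
    	if (dev->driver->dma_quiescent)
    		dev->driver->dma_quiescent(dev);
    
    	drm_idlelock_release(&master->lock);
    }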
    
    
    int drm_i_have_hw_lock(struct drm_device *dev, struct drm_file *file_priv)
    {
    	struct drm_master *master = file_priv->master;
    	return (file_priv->lock_count && master->lock.hw_lock &&
    		_DRM_LOCK_IS_HELD(master->lock.hw_lock->lock) &&
    		master->lock.file_priv == file_priv);
    }
    
    EXPORT_SYMBOL(drm_i_have_hw_lock);